00:00:00.001 Started by upstream project "autotest-per-patch" build number 126168 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.044 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.045 The recommended git tool is: git 00:00:00.045 using credential 00000000-0000-0000-0000-000000000002 00:00:00.046 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.059 Fetching changes from the remote Git repository 00:00:00.074 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.096 Using shallow fetch with depth 1 00:00:00.096 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.096 > git --version # timeout=10 00:00:00.121 > git --version # 'git version 2.39.2' 00:00:00.121 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.159 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.159 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.736 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.748 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.761 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:03.761 > git config core.sparsecheckout # timeout=10 00:00:03.771 > git read-tree -mu HEAD # timeout=10 00:00:03.788 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:03.809 Commit message: "inventory: add WCP3 to free inventory" 00:00:03.809 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:03.883 [Pipeline] Start of Pipeline 00:00:03.898 [Pipeline] library 00:00:03.900 Loading library shm_lib@master 00:00:03.900 Library shm_lib@master is cached. Copying from home. 00:00:03.915 [Pipeline] node 00:00:03.922 Running on WFP16 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:03.926 [Pipeline] { 00:00:03.937 [Pipeline] catchError 00:00:03.938 [Pipeline] { 00:00:03.947 [Pipeline] wrap 00:00:03.955 [Pipeline] { 00:00:03.963 [Pipeline] stage 00:00:03.964 [Pipeline] { (Prologue) 00:00:04.134 [Pipeline] sh 00:00:04.418 + logger -p user.info -t JENKINS-CI 00:00:04.433 [Pipeline] echo 00:00:04.434 Node: WFP16 00:00:04.441 [Pipeline] sh 00:00:04.736 [Pipeline] setCustomBuildProperty 00:00:04.746 [Pipeline] echo 00:00:04.747 Cleanup processes 00:00:04.752 [Pipeline] sh 00:00:05.032 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.032 2468711 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.045 [Pipeline] sh 00:00:05.326 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.326 ++ grep -v 'sudo pgrep' 00:00:05.326 ++ awk '{print $1}' 00:00:05.326 + sudo kill -9 00:00:05.326 + true 00:00:05.342 [Pipeline] cleanWs 00:00:05.353 [WS-CLEANUP] Deleting project workspace... 00:00:05.353 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.360 [WS-CLEANUP] done 00:00:05.364 [Pipeline] setCustomBuildProperty 00:00:05.378 [Pipeline] sh 00:00:05.663 + sudo git config --global --replace-all safe.directory '*' 00:00:05.755 [Pipeline] httpRequest 00:00:05.780 [Pipeline] echo 00:00:05.782 Sorcerer 10.211.164.101 is alive 00:00:05.789 [Pipeline] httpRequest 00:00:05.795 HttpMethod: GET 00:00:05.795 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.796 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.797 Response Code: HTTP/1.1 200 OK 00:00:05.797 Success: Status code 200 is in the accepted range: 200,404 00:00:05.797 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.516 [Pipeline] sh 00:00:06.797 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.811 [Pipeline] httpRequest 00:00:06.833 [Pipeline] echo 00:00:06.834 Sorcerer 10.211.164.101 is alive 00:00:06.841 [Pipeline] httpRequest 00:00:06.845 HttpMethod: GET 00:00:06.846 URL: http://10.211.164.101/packages/spdk_e858834416726384bbd70dbb5796a7993f997ccc.tar.gz 00:00:06.847 Sending request to url: http://10.211.164.101/packages/spdk_e858834416726384bbd70dbb5796a7993f997ccc.tar.gz 00:00:06.849 Response Code: HTTP/1.1 200 OK 00:00:06.850 Success: Status code 200 is in the accepted range: 200,404 00:00:06.851 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e858834416726384bbd70dbb5796a7993f997ccc.tar.gz 00:00:22.080 [Pipeline] sh 00:00:22.363 + tar --no-same-owner -xf spdk_e858834416726384bbd70dbb5796a7993f997ccc.tar.gz 00:00:26.571 [Pipeline] sh 00:00:26.852 + git -C spdk log --oneline -n5 00:00:26.852 e85883441 test/packaging: Zero out the rpath string 00:00:26.852 4f7c82d04 test/packaging: Remove rpath workarounds in tests 00:00:26.852 719d03c6a sock/uring: only register net impl if supported 00:00:26.852 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:00:26.852 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:00:26.863 [Pipeline] } 00:00:26.881 [Pipeline] // stage 00:00:26.887 [Pipeline] stage 00:00:26.889 [Pipeline] { (Prepare) 00:00:26.902 [Pipeline] writeFile 00:00:26.915 [Pipeline] sh 00:00:27.197 + logger -p user.info -t JENKINS-CI 00:00:27.209 [Pipeline] sh 00:00:27.492 + logger -p user.info -t JENKINS-CI 00:00:27.504 [Pipeline] sh 00:00:27.787 + cat autorun-spdk.conf 00:00:27.787 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:27.787 SPDK_TEST_NVMF=1 00:00:27.787 SPDK_TEST_NVME_CLI=1 00:00:27.787 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:27.787 SPDK_TEST_NVMF_NICS=e810 00:00:27.787 SPDK_TEST_VFIOUSER=1 00:00:27.787 SPDK_RUN_UBSAN=1 00:00:27.787 NET_TYPE=phy 00:00:27.794 RUN_NIGHTLY=0 00:00:27.799 [Pipeline] readFile 00:00:27.824 [Pipeline] withEnv 00:00:27.826 [Pipeline] { 00:00:27.838 [Pipeline] sh 00:00:28.122 + set -ex 00:00:28.123 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:28.123 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:28.123 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:28.123 ++ SPDK_TEST_NVMF=1 00:00:28.123 ++ SPDK_TEST_NVME_CLI=1 00:00:28.123 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:28.123 ++ SPDK_TEST_NVMF_NICS=e810 00:00:28.123 ++ SPDK_TEST_VFIOUSER=1 00:00:28.123 ++ SPDK_RUN_UBSAN=1 00:00:28.123 ++ NET_TYPE=phy 00:00:28.123 ++ RUN_NIGHTLY=0 00:00:28.123 + case $SPDK_TEST_NVMF_NICS in 00:00:28.123 + DRIVERS=ice 00:00:28.123 + [[ 
tcp == \r\d\m\a ]] 00:00:28.123 + [[ -n ice ]] 00:00:28.123 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:28.123 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:28.123 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:28.123 rmmod: ERROR: Module irdma is not currently loaded 00:00:28.123 rmmod: ERROR: Module i40iw is not currently loaded 00:00:28.123 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:28.123 + true 00:00:28.123 + for D in $DRIVERS 00:00:28.123 + sudo modprobe ice 00:00:28.123 + exit 0 00:00:28.133 [Pipeline] } 00:00:28.153 [Pipeline] // withEnv 00:00:28.159 [Pipeline] } 00:00:28.175 [Pipeline] // stage 00:00:28.187 [Pipeline] catchError 00:00:28.189 [Pipeline] { 00:00:28.205 [Pipeline] timeout 00:00:28.205 Timeout set to expire in 50 min 00:00:28.207 [Pipeline] { 00:00:28.223 [Pipeline] stage 00:00:28.225 [Pipeline] { (Tests) 00:00:28.242 [Pipeline] sh 00:00:28.526 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:28.526 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:28.526 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:28.527 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:28.527 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:28.527 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:28.527 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:28.527 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:28.527 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:28.527 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:28.527 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:28.527 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:28.527 + source /etc/os-release 00:00:28.527 ++ NAME='Fedora Linux' 00:00:28.527 ++ VERSION='38 (Cloud Edition)' 00:00:28.527 ++ ID=fedora 00:00:28.527 ++ VERSION_ID=38 00:00:28.527 ++ VERSION_CODENAME= 00:00:28.527 ++ PLATFORM_ID=platform:f38 00:00:28.527 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:28.527 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:28.527 ++ LOGO=fedora-logo-icon 00:00:28.527 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:28.527 ++ HOME_URL=https://fedoraproject.org/ 00:00:28.527 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:28.527 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:28.527 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:28.527 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:28.527 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:28.527 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:28.527 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:28.527 ++ SUPPORT_END=2024-05-14 00:00:28.527 ++ VARIANT='Cloud Edition' 00:00:28.527 ++ VARIANT_ID=cloud 00:00:28.527 + uname -a 00:00:28.527 Linux spdk-wfp-16 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:28.527 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:31.064 Hugepages 00:00:31.064 node hugesize free / total 00:00:31.064 node0 1048576kB 0 / 0 00:00:31.064 node0 2048kB 0 / 0 00:00:31.064 node1 1048576kB 0 / 0 00:00:31.064 node1 2048kB 0 / 0 00:00:31.064 00:00:31.064 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:31.064 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:31.064 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:31.064 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:31.064 
I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:31.064 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:31.064 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:31.064 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:31.064 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:31.064 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:31.064 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:31.064 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:31.064 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:31.064 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:31.064 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:31.064 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:31.064 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:31.064 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:31.064 + rm -f /tmp/spdk-ld-path 00:00:31.064 + source autorun-spdk.conf 00:00:31.064 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:31.064 ++ SPDK_TEST_NVMF=1 00:00:31.064 ++ SPDK_TEST_NVME_CLI=1 00:00:31.064 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:31.064 ++ SPDK_TEST_NVMF_NICS=e810 00:00:31.064 ++ SPDK_TEST_VFIOUSER=1 00:00:31.064 ++ SPDK_RUN_UBSAN=1 00:00:31.064 ++ NET_TYPE=phy 00:00:31.064 ++ RUN_NIGHTLY=0 00:00:31.064 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:31.064 + [[ -n '' ]] 00:00:31.064 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:31.064 + for M in /var/spdk/build-*-manifest.txt 00:00:31.064 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:31.064 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:31.064 + for M in /var/spdk/build-*-manifest.txt 00:00:31.064 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:31.064 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:31.064 ++ uname 00:00:31.064 + [[ Linux == \L\i\n\u\x ]] 00:00:31.064 + sudo dmesg -T 00:00:31.332 + sudo dmesg --clear 00:00:31.332 + dmesg_pid=2469631 00:00:31.332 + [[ Fedora Linux == FreeBSD ]] 00:00:31.332 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:31.332 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:31.332 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:31.332 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:31.332 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:31.332 + [[ -x /usr/src/fio-static/fio ]] 00:00:31.332 + export FIO_BIN=/usr/src/fio-static/fio 00:00:31.332 + FIO_BIN=/usr/src/fio-static/fio 00:00:31.332 + sudo dmesg -Tw 00:00:31.332 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:31.332 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:31.332 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:31.332 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:31.332 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:31.332 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:31.332 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:31.332 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:31.332 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:31.332 Test configuration: 00:00:31.332 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:31.332 SPDK_TEST_NVMF=1 00:00:31.332 SPDK_TEST_NVME_CLI=1 00:00:31.332 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:31.332 SPDK_TEST_NVMF_NICS=e810 00:00:31.332 SPDK_TEST_VFIOUSER=1 00:00:31.332 SPDK_RUN_UBSAN=1 00:00:31.332 NET_TYPE=phy 00:00:31.332 RUN_NIGHTLY=0 11:16:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:31.332 11:16:05 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:31.332 11:16:05 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:31.332 11:16:05 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:31.332 11:16:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:31.332 11:16:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:31.332 11:16:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:31.332 11:16:05 -- paths/export.sh@5 -- $ export PATH 00:00:31.332 11:16:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:31.332 11:16:05 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:31.332 11:16:05 -- common/autobuild_common.sh@444 -- $ date +%s 00:00:31.332 11:16:05 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721034965.XXXXXX 00:00:31.332 11:16:05 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721034965.3cL0Cx 00:00:31.332 11:16:05 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:00:31.332 11:16:05 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:00:31.332 11:16:05 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:31.333 11:16:05 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:31.333 11:16:05 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:31.333 11:16:05 -- common/autobuild_common.sh@460 -- $ get_config_params 00:00:31.333 11:16:05 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:31.333 11:16:05 -- common/autotest_common.sh@10 -- $ set +x 00:00:31.333 11:16:05 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:31.333 11:16:05 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:00:31.333 11:16:05 -- pm/common@17 -- $ local monitor 00:00:31.333 11:16:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:31.333 11:16:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:31.333 11:16:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:31.333 11:16:05 -- pm/common@21 -- $ date +%s 00:00:31.333 11:16:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:31.333 11:16:05 -- pm/common@21 -- $ date +%s 00:00:31.333 11:16:05 -- pm/common@25 -- $ sleep 1 00:00:31.333 11:16:05 -- pm/common@21 -- $ date +%s 00:00:31.333 11:16:05 -- pm/common@21 -- $ date +%s 00:00:31.333 11:16:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721034965 00:00:31.333 11:16:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721034965 00:00:31.333 11:16:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721034965 00:00:31.333 11:16:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721034965 00:00:31.333 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721034965_collect-vmstat.pm.log 00:00:31.333 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721034965_collect-cpu-load.pm.log 00:00:31.333 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721034965_collect-cpu-temp.pm.log 00:00:31.333 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721034965_collect-bmc-pm.bmc.pm.log 00:00:32.313 11:16:06 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:00:32.313 11:16:06 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:32.313 11:16:06 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:32.313 11:16:06 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:32.313 11:16:06 -- spdk/autobuild.sh@16 -- $ date -u 00:00:32.313 Mon Jul 15 09:16:06 AM UTC 2024 00:00:32.313 11:16:06 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:32.313 v24.09-pre-204-ge85883441 00:00:32.313 11:16:06 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:32.313 11:16:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:32.313 11:16:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:32.313 11:16:06 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:32.313 11:16:06 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:32.313 11:16:06 -- common/autotest_common.sh@10 -- $ set +x 00:00:32.313 ************************************ 00:00:32.313 START TEST ubsan 00:00:32.313 ************************************ 00:00:32.313 11:16:06 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:32.313 using ubsan 00:00:32.314 00:00:32.314 real 0m0.000s 00:00:32.314 user 0m0.000s 00:00:32.314 sys 0m0.000s 00:00:32.314 11:16:06 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:32.314 11:16:06 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:32.314 ************************************ 00:00:32.314 END TEST ubsan 00:00:32.314 ************************************ 00:00:32.572 11:16:06 -- common/autotest_common.sh@1142 -- $ return 0 00:00:32.572 11:16:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:32.572 11:16:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:32.572 11:16:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:32.572 11:16:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:32.572 11:16:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:32.572 11:16:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:32.572 11:16:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:32.572 11:16:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:32.572 11:16:06 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:32.572 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:32.572 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:32.830 Using 'verbs' RDMA provider 00:00:48.646 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:03.536 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:03.536 Creating mk/config.mk...done. 00:01:03.536 Creating mk/cc.flags.mk...done. 00:01:03.536 Type 'make' to build. 
00:01:03.536 11:16:36 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:03.536 11:16:36 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:03.536 11:16:36 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:03.536 11:16:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:03.536 ************************************ 00:01:03.536 START TEST make 00:01:03.536 ************************************ 00:01:03.536 11:16:36 make -- common/autotest_common.sh@1123 -- $ make -j112 00:01:03.536 make[1]: Nothing to be done for 'all'. 00:01:03.536 The Meson build system 00:01:03.536 Version: 1.3.1 00:01:03.536 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:03.536 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:03.536 Build type: native build 00:01:03.536 Project name: libvfio-user 00:01:03.536 Project version: 0.0.1 00:01:03.536 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:03.536 C linker for the host machine: cc ld.bfd 2.39-16 00:01:03.536 Host machine cpu family: x86_64 00:01:03.536 Host machine cpu: x86_64 00:01:03.536 Run-time dependency threads found: YES 00:01:03.536 Library dl found: YES 00:01:03.536 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:03.536 Run-time dependency json-c found: YES 0.17 00:01:03.536 Run-time dependency cmocka found: YES 1.1.7 00:01:03.536 Program pytest-3 found: NO 00:01:03.536 Program flake8 found: NO 00:01:03.536 Program misspell-fixer found: NO 00:01:03.536 Program restructuredtext-lint found: NO 00:01:03.536 Program valgrind found: YES (/usr/bin/valgrind) 00:01:03.536 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:03.536 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:03.536 Compiler for C supports arguments -Wwrite-strings: YES 00:01:03.536 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:03.536 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:03.536 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:03.536 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:03.536 Build targets in project: 8 00:01:03.536 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:03.536 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:03.536 00:01:03.536 libvfio-user 0.0.1 00:01:03.536 00:01:03.536 User defined options 00:01:03.536 buildtype : debug 00:01:03.536 default_library: shared 00:01:03.536 libdir : /usr/local/lib 00:01:03.536 00:01:03.536 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:04.468 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:04.468 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:04.468 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:04.468 [3/37] Compiling C object samples/null.p/null.c.o 00:01:04.468 [4/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:04.468 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:04.468 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:04.468 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:04.468 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:04.468 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:04.468 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:04.468 [11/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:04.468 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:04.468 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:04.468 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:04.726 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:04.726 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:04.726 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:04.726 [18/37] Compiling C object samples/server.p/server.c.o 00:01:04.726 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:04.726 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:04.726 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:04.726 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:04.726 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:04.726 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:04.726 [25/37] Compiling C object samples/client.p/client.c.o 00:01:04.726 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:04.726 [27/37] Linking target samples/client 00:01:04.726 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:04.726 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:04.726 [30/37] Linking target test/unit_tests 00:01:04.726 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:01:04.983 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:04.983 [33/37] Linking target samples/server 00:01:04.983 [34/37] Linking target samples/null 00:01:04.983 [35/37] Linking target samples/lspci 00:01:04.983 [36/37] Linking target samples/gpio-pci-idio-16 00:01:04.983 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:04.983 INFO: autodetecting backend as ninja 00:01:04.983 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:04.983 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:05.549 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:05.549 ninja: no work to do. 00:01:10.815 The Meson build system 00:01:10.815 Version: 1.3.1 00:01:10.815 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:10.815 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:10.815 Build type: native build 00:01:10.815 Program cat found: YES (/usr/bin/cat) 00:01:10.815 Project name: DPDK 00:01:10.815 Project version: 24.03.0 00:01:10.815 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:10.815 C linker for the host machine: cc ld.bfd 2.39-16 00:01:10.815 Host machine cpu family: x86_64 00:01:10.815 Host machine cpu: x86_64 00:01:10.815 Message: ## Building in Developer Mode ## 00:01:10.815 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:10.815 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:10.815 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:10.815 Program python3 found: YES (/usr/bin/python3) 00:01:10.815 Program cat found: YES (/usr/bin/cat) 00:01:10.815 Compiler for C supports arguments -march=native: YES 00:01:10.815 Checking for size of "void *" : 8 00:01:10.815 Checking for size of "void *" : 8 (cached) 00:01:10.815 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:10.815 Library m found: YES 00:01:10.815 Library numa found: YES 00:01:10.815 Has header "numaif.h" : YES 00:01:10.815 Library fdt found: NO 00:01:10.815 Library execinfo found: NO 00:01:10.815 Has header "execinfo.h" : YES 00:01:10.815 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:10.815 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:10.815 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:10.815 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:10.815 Run-time dependency openssl found: YES 3.0.9 00:01:10.815 Run-time dependency libpcap found: YES 1.10.4 00:01:10.815 Has header "pcap.h" with dependency libpcap: YES 00:01:10.815 Compiler for C supports arguments -Wcast-qual: YES 00:01:10.815 Compiler for C supports arguments -Wdeprecated: YES 00:01:10.815 Compiler for C supports arguments -Wformat: YES 00:01:10.815 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:10.815 Compiler for C supports arguments -Wformat-security: NO 00:01:10.815 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:10.815 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:10.815 Compiler for C supports arguments -Wnested-externs: YES 00:01:10.815 Compiler for C supports arguments -Wold-style-definition: YES 00:01:10.815 Compiler for C supports arguments -Wpointer-arith: YES 00:01:10.815 Compiler for C supports arguments -Wsign-compare: YES 00:01:10.815 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:10.815 Compiler for C supports arguments -Wundef: YES 00:01:10.815 Compiler for C supports arguments -Wwrite-strings: YES 00:01:10.815 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:10.815 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:10.815 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:10.815 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:10.815 Program objdump found: YES (/usr/bin/objdump) 00:01:10.815 Compiler for C supports arguments -mavx512f: YES 00:01:10.815 Checking if "AVX512 checking" compiles: YES 00:01:10.815 Fetching value of define "__SSE4_2__" : 1 00:01:10.815 Fetching value of define "__AES__" : 1 00:01:10.815 Fetching value of define "__AVX__" : 1 00:01:10.815 Fetching value of define "__AVX2__" : 1 00:01:10.815 Fetching value of define "__AVX512BW__" : 1 00:01:10.815 Fetching value of define "__AVX512CD__" : 1 00:01:10.815 Fetching value of define "__AVX512DQ__" : 1 00:01:10.815 Fetching value of define "__AVX512F__" : 1 00:01:10.815 Fetching value of define "__AVX512VL__" : 1 00:01:10.815 Fetching value of define "__PCLMUL__" : 1 00:01:10.815 Fetching value of define "__RDRND__" : 1 00:01:10.815 Fetching value of define "__RDSEED__" : 1 00:01:10.815 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:10.815 Fetching value of define "__znver1__" : (undefined) 00:01:10.815 Fetching value of define "__znver2__" : (undefined) 00:01:10.815 Fetching value of define "__znver3__" : (undefined) 00:01:10.815 Fetching value of define "__znver4__" : (undefined) 00:01:10.815 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:10.815 Message: lib/log: Defining dependency "log" 00:01:10.815 Message: lib/kvargs: Defining dependency "kvargs" 00:01:10.815 Message: lib/telemetry: Defining dependency "telemetry" 00:01:10.815 Checking for function "getentropy" : NO 00:01:10.815 Message: lib/eal: Defining dependency "eal" 00:01:10.815 Message: lib/ring: Defining dependency "ring" 00:01:10.815 Message: lib/rcu: Defining dependency "rcu" 00:01:10.815 Message: lib/mempool: Defining dependency "mempool" 00:01:10.815 Message: lib/mbuf: Defining dependency "mbuf" 00:01:10.815 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:10.815 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:10.815 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:10.815 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:10.815 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:10.815 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:10.815 Compiler for C supports arguments -mpclmul: YES 00:01:10.815 Compiler for C supports arguments -maes: YES 00:01:10.815 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:10.815 Compiler for C supports arguments -mavx512bw: YES 00:01:10.815 Compiler for C supports arguments -mavx512dq: YES 00:01:10.815 Compiler for C supports arguments -mavx512vl: YES 00:01:10.815 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:10.815 Compiler for C supports arguments -mavx2: YES 00:01:10.815 Compiler for C supports arguments -mavx: YES 00:01:10.815 Message: lib/net: Defining dependency "net" 00:01:10.816 Message: lib/meter: Defining dependency "meter" 00:01:10.816 Message: lib/ethdev: Defining dependency "ethdev" 00:01:10.816 Message: lib/pci: Defining dependency "pci" 00:01:10.816 Message: lib/cmdline: Defining dependency "cmdline" 00:01:10.816 Message: lib/hash: Defining dependency "hash" 00:01:10.816 Message: lib/timer: Defining dependency "timer" 00:01:10.816 Message: lib/compressdev: Defining dependency "compressdev" 00:01:10.816 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:10.816 Message: lib/dmadev: Defining dependency "dmadev" 00:01:10.816 
Compiler for C supports arguments -Wno-cast-qual: YES 00:01:10.816 Message: lib/power: Defining dependency "power" 00:01:10.816 Message: lib/reorder: Defining dependency "reorder" 00:01:10.816 Message: lib/security: Defining dependency "security" 00:01:10.816 Has header "linux/userfaultfd.h" : YES 00:01:10.816 Has header "linux/vduse.h" : YES 00:01:10.816 Message: lib/vhost: Defining dependency "vhost" 00:01:10.816 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:10.816 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:10.816 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:10.816 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:10.816 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:10.816 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:10.816 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:10.816 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:10.816 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:10.816 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:10.816 Program doxygen found: YES (/usr/bin/doxygen) 00:01:10.816 Configuring doxy-api-html.conf using configuration 00:01:10.816 Configuring doxy-api-man.conf using configuration 00:01:10.816 Program mandb found: YES (/usr/bin/mandb) 00:01:10.816 Program sphinx-build found: NO 00:01:10.816 Configuring rte_build_config.h using configuration 00:01:10.816 Message: 00:01:10.816 ================= 00:01:10.816 Applications Enabled 00:01:10.816 ================= 00:01:10.816 00:01:10.816 apps: 00:01:10.816 00:01:10.816 00:01:10.816 Message: 00:01:10.816 ================= 00:01:10.816 Libraries Enabled 00:01:10.816 ================= 00:01:10.816 00:01:10.816 libs: 00:01:10.816 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:10.816 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:10.816 cryptodev, dmadev, power, reorder, security, vhost, 00:01:10.816 00:01:10.816 Message: 00:01:10.816 =============== 00:01:10.816 Drivers Enabled 00:01:10.816 =============== 00:01:10.816 00:01:10.816 common: 00:01:10.816 00:01:10.816 bus: 00:01:10.816 pci, vdev, 00:01:10.816 mempool: 00:01:10.816 ring, 00:01:10.816 dma: 00:01:10.816 00:01:10.816 net: 00:01:10.816 00:01:10.816 crypto: 00:01:10.816 00:01:10.816 compress: 00:01:10.816 00:01:10.816 vdpa: 00:01:10.816 00:01:10.816 00:01:10.816 Message: 00:01:10.816 ================= 00:01:10.816 Content Skipped 00:01:10.816 ================= 00:01:10.816 00:01:10.816 apps: 00:01:10.816 dumpcap: explicitly disabled via build config 00:01:10.816 graph: explicitly disabled via build config 00:01:10.816 pdump: explicitly disabled via build config 00:01:10.816 proc-info: explicitly disabled via build config 00:01:10.816 test-acl: explicitly disabled via build config 00:01:10.816 test-bbdev: explicitly disabled via build config 00:01:10.816 test-cmdline: explicitly disabled via build config 00:01:10.816 test-compress-perf: explicitly disabled via build config 00:01:10.816 test-crypto-perf: explicitly disabled via build config 00:01:10.816 test-dma-perf: explicitly disabled via build config 00:01:10.816 test-eventdev: explicitly disabled via build config 00:01:10.816 test-fib: explicitly disabled via build config 00:01:10.816 test-flow-perf: explicitly disabled via build config 00:01:10.816 test-gpudev: explicitly disabled via build config 
00:01:10.816 test-mldev: explicitly disabled via build config 00:01:10.816 test-pipeline: explicitly disabled via build config 00:01:10.816 test-pmd: explicitly disabled via build config 00:01:10.816 test-regex: explicitly disabled via build config 00:01:10.816 test-sad: explicitly disabled via build config 00:01:10.816 test-security-perf: explicitly disabled via build config 00:01:10.816 00:01:10.816 libs: 00:01:10.816 argparse: explicitly disabled via build config 00:01:10.816 metrics: explicitly disabled via build config 00:01:10.816 acl: explicitly disabled via build config 00:01:10.816 bbdev: explicitly disabled via build config 00:01:10.816 bitratestats: explicitly disabled via build config 00:01:10.816 bpf: explicitly disabled via build config 00:01:10.816 cfgfile: explicitly disabled via build config 00:01:10.816 distributor: explicitly disabled via build config 00:01:10.816 efd: explicitly disabled via build config 00:01:10.816 eventdev: explicitly disabled via build config 00:01:10.816 dispatcher: explicitly disabled via build config 00:01:10.816 gpudev: explicitly disabled via build config 00:01:10.816 gro: explicitly disabled via build config 00:01:10.816 gso: explicitly disabled via build config 00:01:10.816 ip_frag: explicitly disabled via build config 00:01:10.816 jobstats: explicitly disabled via build config 00:01:10.816 latencystats: explicitly disabled via build config 00:01:10.816 lpm: explicitly disabled via build config 00:01:10.816 member: explicitly disabled via build config 00:01:10.816 pcapng: explicitly disabled via build config 00:01:10.816 rawdev: explicitly disabled via build config 00:01:10.816 regexdev: explicitly disabled via build config 00:01:10.816 mldev: explicitly disabled via build config 00:01:10.816 rib: explicitly disabled via build config 00:01:10.816 sched: explicitly disabled via build config 00:01:10.816 stack: explicitly disabled via build config 00:01:10.816 ipsec: explicitly disabled via build config 00:01:10.816 pdcp: explicitly disabled via build config 00:01:10.816 fib: explicitly disabled via build config 00:01:10.816 port: explicitly disabled via build config 00:01:10.816 pdump: explicitly disabled via build config 00:01:10.816 table: explicitly disabled via build config 00:01:10.816 pipeline: explicitly disabled via build config 00:01:10.816 graph: explicitly disabled via build config 00:01:10.816 node: explicitly disabled via build config 00:01:10.816 00:01:10.816 drivers: 00:01:10.816 common/cpt: not in enabled drivers build config 00:01:10.816 common/dpaax: not in enabled drivers build config 00:01:10.816 common/iavf: not in enabled drivers build config 00:01:10.816 common/idpf: not in enabled drivers build config 00:01:10.816 common/ionic: not in enabled drivers build config 00:01:10.816 common/mvep: not in enabled drivers build config 00:01:10.816 common/octeontx: not in enabled drivers build config 00:01:10.816 bus/auxiliary: not in enabled drivers build config 00:01:10.816 bus/cdx: not in enabled drivers build config 00:01:10.816 bus/dpaa: not in enabled drivers build config 00:01:10.816 bus/fslmc: not in enabled drivers build config 00:01:10.816 bus/ifpga: not in enabled drivers build config 00:01:10.816 bus/platform: not in enabled drivers build config 00:01:10.816 bus/uacce: not in enabled drivers build config 00:01:10.816 bus/vmbus: not in enabled drivers build config 00:01:10.816 common/cnxk: not in enabled drivers build config 00:01:10.816 common/mlx5: not in enabled drivers build config 00:01:10.816 common/nfp: not in 
enabled drivers build config 00:01:10.816 common/nitrox: not in enabled drivers build config 00:01:10.816 common/qat: not in enabled drivers build config 00:01:10.816 common/sfc_efx: not in enabled drivers build config 00:01:10.816 mempool/bucket: not in enabled drivers build config 00:01:10.816 mempool/cnxk: not in enabled drivers build config 00:01:10.816 mempool/dpaa: not in enabled drivers build config 00:01:10.816 mempool/dpaa2: not in enabled drivers build config 00:01:10.816 mempool/octeontx: not in enabled drivers build config 00:01:10.816 mempool/stack: not in enabled drivers build config 00:01:10.816 dma/cnxk: not in enabled drivers build config 00:01:10.816 dma/dpaa: not in enabled drivers build config 00:01:10.816 dma/dpaa2: not in enabled drivers build config 00:01:10.816 dma/hisilicon: not in enabled drivers build config 00:01:10.816 dma/idxd: not in enabled drivers build config 00:01:10.816 dma/ioat: not in enabled drivers build config 00:01:10.816 dma/skeleton: not in enabled drivers build config 00:01:10.816 net/af_packet: not in enabled drivers build config 00:01:10.816 net/af_xdp: not in enabled drivers build config 00:01:10.816 net/ark: not in enabled drivers build config 00:01:10.816 net/atlantic: not in enabled drivers build config 00:01:10.816 net/avp: not in enabled drivers build config 00:01:10.816 net/axgbe: not in enabled drivers build config 00:01:10.816 net/bnx2x: not in enabled drivers build config 00:01:10.816 net/bnxt: not in enabled drivers build config 00:01:10.816 net/bonding: not in enabled drivers build config 00:01:10.816 net/cnxk: not in enabled drivers build config 00:01:10.816 net/cpfl: not in enabled drivers build config 00:01:10.816 net/cxgbe: not in enabled drivers build config 00:01:10.816 net/dpaa: not in enabled drivers build config 00:01:10.816 net/dpaa2: not in enabled drivers build config 00:01:10.816 net/e1000: not in enabled drivers build config 00:01:10.816 net/ena: not in enabled drivers build config 00:01:10.816 net/enetc: not in enabled drivers build config 00:01:10.816 net/enetfec: not in enabled drivers build config 00:01:10.816 net/enic: not in enabled drivers build config 00:01:10.816 net/failsafe: not in enabled drivers build config 00:01:10.816 net/fm10k: not in enabled drivers build config 00:01:10.816 net/gve: not in enabled drivers build config 00:01:10.816 net/hinic: not in enabled drivers build config 00:01:10.816 net/hns3: not in enabled drivers build config 00:01:10.816 net/i40e: not in enabled drivers build config 00:01:10.816 net/iavf: not in enabled drivers build config 00:01:10.816 net/ice: not in enabled drivers build config 00:01:10.816 net/idpf: not in enabled drivers build config 00:01:10.816 net/igc: not in enabled drivers build config 00:01:10.816 net/ionic: not in enabled drivers build config 00:01:10.816 net/ipn3ke: not in enabled drivers build config 00:01:10.816 net/ixgbe: not in enabled drivers build config 00:01:10.816 net/mana: not in enabled drivers build config 00:01:10.816 net/memif: not in enabled drivers build config 00:01:10.816 net/mlx4: not in enabled drivers build config 00:01:10.816 net/mlx5: not in enabled drivers build config 00:01:10.816 net/mvneta: not in enabled drivers build config 00:01:10.816 net/mvpp2: not in enabled drivers build config 00:01:10.816 net/netvsc: not in enabled drivers build config 00:01:10.816 net/nfb: not in enabled drivers build config 00:01:10.816 net/nfp: not in enabled drivers build config 00:01:10.816 net/ngbe: not in enabled drivers build config 00:01:10.816 
net/null: not in enabled drivers build config 00:01:10.816 net/octeontx: not in enabled drivers build config 00:01:10.817 net/octeon_ep: not in enabled drivers build config 00:01:10.817 net/pcap: not in enabled drivers build config 00:01:10.817 net/pfe: not in enabled drivers build config 00:01:10.817 net/qede: not in enabled drivers build config 00:01:10.817 net/ring: not in enabled drivers build config 00:01:10.817 net/sfc: not in enabled drivers build config 00:01:10.817 net/softnic: not in enabled drivers build config 00:01:10.817 net/tap: not in enabled drivers build config 00:01:10.817 net/thunderx: not in enabled drivers build config 00:01:10.817 net/txgbe: not in enabled drivers build config 00:01:10.817 net/vdev_netvsc: not in enabled drivers build config 00:01:10.817 net/vhost: not in enabled drivers build config 00:01:10.817 net/virtio: not in enabled drivers build config 00:01:10.817 net/vmxnet3: not in enabled drivers build config 00:01:10.817 raw/*: missing internal dependency, "rawdev" 00:01:10.817 crypto/armv8: not in enabled drivers build config 00:01:10.817 crypto/bcmfs: not in enabled drivers build config 00:01:10.817 crypto/caam_jr: not in enabled drivers build config 00:01:10.817 crypto/ccp: not in enabled drivers build config 00:01:10.817 crypto/cnxk: not in enabled drivers build config 00:01:10.817 crypto/dpaa_sec: not in enabled drivers build config 00:01:10.817 crypto/dpaa2_sec: not in enabled drivers build config 00:01:10.817 crypto/ipsec_mb: not in enabled drivers build config 00:01:10.817 crypto/mlx5: not in enabled drivers build config 00:01:10.817 crypto/mvsam: not in enabled drivers build config 00:01:10.817 crypto/nitrox: not in enabled drivers build config 00:01:10.817 crypto/null: not in enabled drivers build config 00:01:10.817 crypto/octeontx: not in enabled drivers build config 00:01:10.817 crypto/openssl: not in enabled drivers build config 00:01:10.817 crypto/scheduler: not in enabled drivers build config 00:01:10.817 crypto/uadk: not in enabled drivers build config 00:01:10.817 crypto/virtio: not in enabled drivers build config 00:01:10.817 compress/isal: not in enabled drivers build config 00:01:10.817 compress/mlx5: not in enabled drivers build config 00:01:10.817 compress/nitrox: not in enabled drivers build config 00:01:10.817 compress/octeontx: not in enabled drivers build config 00:01:10.817 compress/zlib: not in enabled drivers build config 00:01:10.817 regex/*: missing internal dependency, "regexdev" 00:01:10.817 ml/*: missing internal dependency, "mldev" 00:01:10.817 vdpa/ifc: not in enabled drivers build config 00:01:10.817 vdpa/mlx5: not in enabled drivers build config 00:01:10.817 vdpa/nfp: not in enabled drivers build config 00:01:10.817 vdpa/sfc: not in enabled drivers build config 00:01:10.817 event/*: missing internal dependency, "eventdev" 00:01:10.817 baseband/*: missing internal dependency, "bbdev" 00:01:10.817 gpu/*: missing internal dependency, "gpudev" 00:01:10.817 00:01:10.817 00:01:11.077 Build targets in project: 85 00:01:11.077 00:01:11.077 DPDK 24.03.0 00:01:11.077 00:01:11.077 User defined options 00:01:11.077 buildtype : debug 00:01:11.077 default_library : shared 00:01:11.077 libdir : lib 00:01:11.077 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:11.077 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:11.077 c_link_args : 00:01:11.077 cpu_instruction_set: native 00:01:11.077 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:11.077 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:11.077 enable_docs : false 00:01:11.077 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:11.077 enable_kmods : false 00:01:11.077 max_lcores : 128 00:01:11.077 tests : false 00:01:11.077 00:01:11.077 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:11.661 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:11.661 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:11.661 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:11.661 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:11.661 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:11.927 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:11.927 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:11.927 [7/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:11.927 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:11.927 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:11.927 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:11.927 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:11.927 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:11.927 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:11.927 [14/268] Linking static target lib/librte_kvargs.a 00:01:11.927 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:11.927 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:11.927 [17/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:11.927 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:11.927 [19/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:11.927 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:11.927 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:11.927 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:11.927 [23/268] Linking static target lib/librte_pci.a 00:01:11.927 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:11.927 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:11.927 [26/268] Linking static target lib/librte_log.a 00:01:11.927 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:11.927 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:11.927 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:12.186 [30/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:12.186 [31/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:12.186 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:12.186 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:12.186 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:12.186 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:12.186 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:12.444 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:12.444 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:12.444 [39/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:12.444 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:12.444 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:12.444 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:12.444 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:12.444 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:12.444 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:12.444 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:12.444 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:12.444 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:12.444 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:12.444 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:12.444 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:12.444 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:12.444 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:12.444 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:12.444 [55/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:12.444 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:12.444 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:12.444 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:12.444 [59/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.444 [60/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:12.444 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:12.444 [62/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:12.444 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:12.444 [64/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:12.444 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:12.444 [66/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:12.444 [67/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:12.444 [68/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.444 [69/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:12.444 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:12.444 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:12.444 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:12.444 [73/268] Linking static target lib/librte_telemetry.a 00:01:12.444 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:12.444 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:12.444 [76/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:12.444 [77/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:12.444 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:12.444 [79/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:12.444 [80/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:12.444 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:12.444 [82/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:12.444 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:12.444 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:12.444 [85/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:12.444 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:12.444 [87/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:12.444 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:12.444 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:12.444 [90/268] Linking static target lib/librte_meter.a 00:01:12.444 [91/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:12.444 [92/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:12.444 [93/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:12.444 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:12.444 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:12.444 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:12.444 [97/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:12.444 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:12.444 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:12.444 [100/268] Linking static target lib/librte_ring.a 00:01:12.445 [101/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:12.445 [102/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:12.445 [103/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:12.445 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:12.445 [105/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:12.445 [106/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:12.702 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:12.702 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:12.702 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:12.702 [110/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:12.702 [111/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:12.702 [112/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:12.702 [113/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:12.702 [114/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:12.702 [115/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:12.702 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:12.703 [117/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:12.703 [118/268] Linking static target lib/librte_net.a 00:01:12.703 [119/268] Linking static target lib/librte_mempool.a 00:01:12.703 [120/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:12.703 [121/268] Linking static target lib/librte_timer.a 00:01:12.703 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:12.703 [123/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:12.703 [124/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:12.703 [125/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:12.703 [126/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:12.703 [127/268] Linking static target lib/librte_rcu.a 00:01:12.703 [128/268] Linking static target lib/librte_cmdline.a 00:01:12.703 [129/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:12.703 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:12.703 [131/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:12.703 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:12.703 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:12.703 [134/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:12.703 [135/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:12.703 [136/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:12.703 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:12.703 [138/268] Linking static target lib/librte_dmadev.a 00:01:12.703 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:12.703 [140/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:12.703 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:12.703 [142/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:12.703 [143/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.703 [144/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:12.703 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:12.703 [146/268] Linking static target lib/librte_compressdev.a 00:01:12.703 [147/268] Linking target lib/librte_log.so.24.1 00:01:12.703 [148/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.703 [149/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:12.703 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:12.703 [151/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:12.961 [152/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:12.961 [153/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.961 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:12.961 [155/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:12.961 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:12.961 [157/268] Linking static target lib/librte_mbuf.a 00:01:12.961 [158/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.961 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:12.961 [160/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:12.961 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:12.961 [162/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:12.961 [163/268] Linking static target lib/librte_reorder.a 00:01:12.961 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:12.961 [165/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:12.961 [166/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.961 [167/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:12.961 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:12.961 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:12.961 [170/268] Linking target lib/librte_kvargs.so.24.1 00:01:12.961 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:12.961 [172/268] Linking static target lib/librte_power.a 00:01:12.961 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:12.961 [174/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.961 [175/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.961 [176/268] Linking target lib/librte_telemetry.so.24.1 00:01:12.961 [177/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:12.961 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:12.961 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:12.961 [180/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:12.961 [181/268] Linking static target lib/librte_hash.a 00:01:12.961 [182/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:12.961 [183/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:12.961 [184/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:12.961 [185/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:12.961 [186/268] Linking static target lib/librte_security.a 00:01:13.220 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:13.220 [188/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:13.220 [189/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:13.220 [190/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:13.220 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:13.220 [192/268] Linking static target 
lib/librte_eal.a 00:01:13.220 [193/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:13.220 [194/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:13.220 [195/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:13.220 [196/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:13.220 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:13.220 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:13.220 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:13.220 [200/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:13.220 [201/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:13.220 [202/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.220 [203/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:13.220 [204/268] Linking static target drivers/librte_bus_vdev.a 00:01:13.478 [205/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:13.478 [206/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:13.478 [207/268] Linking static target lib/librte_cryptodev.a 00:01:13.478 [208/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.478 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:13.478 [210/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:13.478 [211/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:13.478 [212/268] Linking static target drivers/librte_bus_pci.a 00:01:13.478 [213/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:13.478 [214/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:13.478 [215/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.478 [216/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.478 [217/268] Linking static target drivers/librte_mempool_ring.a 00:01:13.736 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.736 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.736 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.736 [221/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:13.736 [222/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.993 [223/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.993 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.250 [225/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.184 [226/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.442 [227/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:15.442 [228/268] Linking static 
target lib/librte_vhost.a 00:01:15.442 [229/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:15.700 [230/268] Linking static target lib/librte_ethdev.a 00:01:17.603 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.166 [232/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.166 [233/268] Linking target lib/librte_eal.so.24.1 00:01:24.166 [234/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:24.166 [235/268] Linking target lib/librte_pci.so.24.1 00:01:24.166 [236/268] Linking target lib/librte_meter.so.24.1 00:01:24.166 [237/268] Linking target lib/librte_timer.so.24.1 00:01:24.166 [238/268] Linking target lib/librte_ring.so.24.1 00:01:24.166 [239/268] Linking target lib/librte_dmadev.so.24.1 00:01:24.166 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:24.166 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:24.166 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:24.166 [243/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.166 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:24.166 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:24.166 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:24.166 [247/268] Linking target lib/librte_mempool.so.24.1 00:01:24.166 [248/268] Linking target lib/librte_rcu.so.24.1 00:01:24.166 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:24.425 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:24.425 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:24.425 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:24.425 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:24.684 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:24.684 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:24.684 [256/268] Linking target lib/librte_compressdev.so.24.1 00:01:24.684 [257/268] Linking target lib/librte_reorder.so.24.1 00:01:24.684 [258/268] Linking target lib/librte_net.so.24.1 00:01:24.684 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:24.684 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:24.942 [261/268] Linking target lib/librte_hash.so.24.1 00:01:24.943 [262/268] Linking target lib/librte_cmdline.so.24.1 00:01:24.943 [263/268] Linking target lib/librte_security.so.24.1 00:01:24.943 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:24.943 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:24.943 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:25.201 [267/268] Linking target lib/librte_power.so.24.1 00:01:25.201 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:25.201 INFO: autodetecting backend as ninja 00:01:25.201 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:01:26.137 CC lib/log/log.o 00:01:26.137 CC lib/log/log_flags.o 00:01:26.137 CC 
lib/log/log_deprecated.o 00:01:26.137 CC lib/ut_mock/mock.o 00:01:26.137 CC lib/ut/ut.o 00:01:26.396 LIB libspdk_log.a 00:01:26.396 LIB libspdk_ut_mock.a 00:01:26.396 LIB libspdk_ut.a 00:01:26.396 SO libspdk_log.so.7.0 00:01:26.396 SO libspdk_ut_mock.so.6.0 00:01:26.396 SO libspdk_ut.so.2.0 00:01:26.654 SYMLINK libspdk_ut_mock.so 00:01:26.654 SYMLINK libspdk_log.so 00:01:26.654 SYMLINK libspdk_ut.so 00:01:26.912 CC lib/util/base64.o 00:01:26.912 CC lib/util/bit_array.o 00:01:26.912 CC lib/util/cpuset.o 00:01:26.912 CC lib/util/crc16.o 00:01:26.912 CC lib/util/crc32.o 00:01:26.912 CC lib/util/crc32c.o 00:01:26.912 CC lib/util/crc32_ieee.o 00:01:26.912 CC lib/util/crc64.o 00:01:26.912 CC lib/util/dif.o 00:01:26.912 CC lib/util/file.o 00:01:26.912 CC lib/util/fd.o 00:01:26.912 CC lib/util/hexlify.o 00:01:26.912 CC lib/ioat/ioat.o 00:01:26.912 CC lib/util/iov.o 00:01:26.912 CC lib/util/math.o 00:01:26.912 CC lib/util/pipe.o 00:01:26.912 CC lib/dma/dma.o 00:01:26.912 CC lib/util/strerror_tls.o 00:01:26.912 CC lib/util/string.o 00:01:26.912 CC lib/util/uuid.o 00:01:26.912 CC lib/util/fd_group.o 00:01:26.912 CXX lib/trace_parser/trace.o 00:01:26.912 CC lib/util/xor.o 00:01:26.912 CC lib/util/zipf.o 00:01:27.171 CC lib/vfio_user/host/vfio_user_pci.o 00:01:27.171 CC lib/vfio_user/host/vfio_user.o 00:01:27.171 LIB libspdk_dma.a 00:01:27.171 SO libspdk_dma.so.4.0 00:01:27.171 LIB libspdk_ioat.a 00:01:27.171 SYMLINK libspdk_dma.so 00:01:27.171 SO libspdk_ioat.so.7.0 00:01:27.431 SYMLINK libspdk_ioat.so 00:01:27.431 LIB libspdk_vfio_user.a 00:01:27.431 SO libspdk_vfio_user.so.5.0 00:01:27.431 LIB libspdk_util.a 00:01:27.431 SYMLINK libspdk_vfio_user.so 00:01:27.431 SO libspdk_util.so.9.1 00:01:27.690 SYMLINK libspdk_util.so 00:01:27.690 LIB libspdk_trace_parser.a 00:01:27.949 SO libspdk_trace_parser.so.5.0 00:01:27.949 SYMLINK libspdk_trace_parser.so 00:01:27.949 CC lib/json/json_parse.o 00:01:27.949 CC lib/json/json_util.o 00:01:27.949 CC lib/json/json_write.o 00:01:27.949 CC lib/vmd/vmd.o 00:01:27.949 CC lib/rdma_provider/common.o 00:01:27.949 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:27.949 CC lib/vmd/led.o 00:01:27.949 CC lib/idxd/idxd.o 00:01:27.949 CC lib/conf/conf.o 00:01:27.949 CC lib/idxd/idxd_user.o 00:01:27.949 CC lib/env_dpdk/env.o 00:01:27.949 CC lib/idxd/idxd_kernel.o 00:01:27.949 CC lib/env_dpdk/memory.o 00:01:27.949 CC lib/rdma_utils/rdma_utils.o 00:01:27.949 CC lib/env_dpdk/pci.o 00:01:27.949 CC lib/env_dpdk/init.o 00:01:27.949 CC lib/env_dpdk/threads.o 00:01:27.949 CC lib/env_dpdk/pci_ioat.o 00:01:27.949 CC lib/env_dpdk/pci_virtio.o 00:01:27.949 CC lib/env_dpdk/pci_vmd.o 00:01:27.949 CC lib/env_dpdk/pci_idxd.o 00:01:27.949 CC lib/env_dpdk/pci_event.o 00:01:27.949 CC lib/env_dpdk/sigbus_handler.o 00:01:27.949 CC lib/env_dpdk/pci_dpdk.o 00:01:27.949 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:27.949 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:28.207 LIB libspdk_conf.a 00:01:28.207 SO libspdk_conf.so.6.0 00:01:28.207 LIB libspdk_rdma_utils.a 00:01:28.465 LIB libspdk_json.a 00:01:28.465 SYMLINK libspdk_conf.so 00:01:28.465 SO libspdk_rdma_utils.so.1.0 00:01:28.465 SO libspdk_json.so.6.0 00:01:28.465 LIB libspdk_rdma_provider.a 00:01:28.465 SYMLINK libspdk_rdma_utils.so 00:01:28.465 SO libspdk_rdma_provider.so.6.0 00:01:28.465 SYMLINK libspdk_json.so 00:01:28.465 SYMLINK libspdk_rdma_provider.so 00:01:28.723 LIB libspdk_vmd.a 00:01:28.723 LIB libspdk_idxd.a 00:01:28.723 SO libspdk_vmd.so.6.0 00:01:28.723 CC lib/jsonrpc/jsonrpc_server.o 00:01:28.723 SO libspdk_idxd.so.12.0 00:01:28.723 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:01:28.723 CC lib/jsonrpc/jsonrpc_client.o 00:01:28.723 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:28.723 SYMLINK libspdk_vmd.so 00:01:28.723 SYMLINK libspdk_idxd.so 00:01:28.981 LIB libspdk_jsonrpc.a 00:01:28.981 SO libspdk_jsonrpc.so.6.0 00:01:29.238 SYMLINK libspdk_jsonrpc.so 00:01:29.496 LIB libspdk_env_dpdk.a 00:01:29.496 CC lib/rpc/rpc.o 00:01:29.496 SO libspdk_env_dpdk.so.14.1 00:01:29.496 LIB libspdk_rpc.a 00:01:29.755 SO libspdk_rpc.so.6.0 00:01:29.755 SYMLINK libspdk_env_dpdk.so 00:01:29.755 SYMLINK libspdk_rpc.so 00:01:30.014 CC lib/trace/trace.o 00:01:30.014 CC lib/trace/trace_flags.o 00:01:30.014 CC lib/trace/trace_rpc.o 00:01:30.014 CC lib/keyring/keyring.o 00:01:30.014 CC lib/notify/notify.o 00:01:30.014 CC lib/notify/notify_rpc.o 00:01:30.014 CC lib/keyring/keyring_rpc.o 00:01:30.304 LIB libspdk_notify.a 00:01:30.304 SO libspdk_notify.so.6.0 00:01:30.304 LIB libspdk_keyring.a 00:01:30.304 LIB libspdk_trace.a 00:01:30.304 SYMLINK libspdk_notify.so 00:01:30.304 SO libspdk_keyring.so.1.0 00:01:30.304 SO libspdk_trace.so.10.0 00:01:30.641 SYMLINK libspdk_keyring.so 00:01:30.641 SYMLINK libspdk_trace.so 00:01:30.913 CC lib/thread/thread.o 00:01:30.913 CC lib/thread/iobuf.o 00:01:30.913 CC lib/sock/sock.o 00:01:30.913 CC lib/sock/sock_rpc.o 00:01:31.171 LIB libspdk_sock.a 00:01:31.171 SO libspdk_sock.so.10.0 00:01:31.171 SYMLINK libspdk_sock.so 00:01:31.738 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:31.738 CC lib/nvme/nvme_ctrlr.o 00:01:31.738 CC lib/nvme/nvme_fabric.o 00:01:31.738 CC lib/nvme/nvme_ns_cmd.o 00:01:31.738 CC lib/nvme/nvme_ns.o 00:01:31.738 CC lib/nvme/nvme_pcie_common.o 00:01:31.738 CC lib/nvme/nvme_pcie.o 00:01:31.738 CC lib/nvme/nvme_qpair.o 00:01:31.738 CC lib/nvme/nvme.o 00:01:31.738 CC lib/nvme/nvme_quirks.o 00:01:31.738 CC lib/nvme/nvme_transport.o 00:01:31.738 CC lib/nvme/nvme_discovery.o 00:01:31.738 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:31.738 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:31.738 CC lib/nvme/nvme_tcp.o 00:01:31.738 CC lib/nvme/nvme_opal.o 00:01:31.738 CC lib/nvme/nvme_io_msg.o 00:01:31.738 CC lib/nvme/nvme_poll_group.o 00:01:31.738 CC lib/nvme/nvme_zns.o 00:01:31.738 CC lib/nvme/nvme_stubs.o 00:01:31.738 CC lib/nvme/nvme_auth.o 00:01:31.738 CC lib/nvme/nvme_cuse.o 00:01:31.738 CC lib/nvme/nvme_vfio_user.o 00:01:31.738 CC lib/nvme/nvme_rdma.o 00:01:32.304 LIB libspdk_thread.a 00:01:32.304 SO libspdk_thread.so.10.1 00:01:32.304 SYMLINK libspdk_thread.so 00:01:32.561 CC lib/blob/blobstore.o 00:01:32.561 CC lib/blob/request.o 00:01:32.561 CC lib/blob/zeroes.o 00:01:32.561 CC lib/blob/blob_bs_dev.o 00:01:32.561 CC lib/vfu_tgt/tgt_endpoint.o 00:01:32.561 CC lib/accel/accel.o 00:01:32.561 CC lib/vfu_tgt/tgt_rpc.o 00:01:32.561 CC lib/accel/accel_rpc.o 00:01:32.561 CC lib/accel/accel_sw.o 00:01:32.561 CC lib/init/json_config.o 00:01:32.561 CC lib/init/subsystem.o 00:01:32.561 CC lib/init/subsystem_rpc.o 00:01:32.561 CC lib/virtio/virtio.o 00:01:32.561 CC lib/init/rpc.o 00:01:32.561 CC lib/virtio/virtio_vhost_user.o 00:01:32.561 CC lib/virtio/virtio_vfio_user.o 00:01:32.561 CC lib/virtio/virtio_pci.o 00:01:32.819 LIB libspdk_init.a 00:01:32.819 SO libspdk_init.so.5.0 00:01:33.078 LIB libspdk_vfu_tgt.a 00:01:33.078 LIB libspdk_virtio.a 00:01:33.078 SO libspdk_vfu_tgt.so.3.0 00:01:33.078 SYMLINK libspdk_init.so 00:01:33.078 SO libspdk_virtio.so.7.0 00:01:33.078 SYMLINK libspdk_vfu_tgt.so 00:01:33.078 SYMLINK libspdk_virtio.so 00:01:33.336 CC lib/event/app.o 00:01:33.336 CC lib/event/reactor.o 00:01:33.336 CC lib/event/log_rpc.o 
00:01:33.336 CC lib/event/app_rpc.o 00:01:33.336 CC lib/event/scheduler_static.o 00:01:33.595 LIB libspdk_accel.a 00:01:33.595 SO libspdk_accel.so.15.1 00:01:33.853 SYMLINK libspdk_accel.so 00:01:33.853 LIB libspdk_event.a 00:01:33.853 LIB libspdk_nvme.a 00:01:33.853 SO libspdk_event.so.14.0 00:01:33.853 SO libspdk_nvme.so.13.1 00:01:33.853 SYMLINK libspdk_event.so 00:01:34.111 CC lib/bdev/bdev.o 00:01:34.111 CC lib/bdev/bdev_rpc.o 00:01:34.111 CC lib/bdev/bdev_zone.o 00:01:34.111 CC lib/bdev/part.o 00:01:34.111 CC lib/bdev/scsi_nvme.o 00:01:34.370 SYMLINK libspdk_nvme.so 00:01:36.900 LIB libspdk_bdev.a 00:01:36.900 SO libspdk_bdev.so.15.1 00:01:36.900 SYMLINK libspdk_bdev.so 00:01:37.158 CC lib/nvmf/ctrlr.o 00:01:37.158 CC lib/nvmf/ctrlr_discovery.o 00:01:37.158 CC lib/nvmf/ctrlr_bdev.o 00:01:37.158 CC lib/nvmf/subsystem.o 00:01:37.158 CC lib/nvmf/nvmf.o 00:01:37.158 CC lib/nvmf/nvmf_rpc.o 00:01:37.158 CC lib/nvmf/transport.o 00:01:37.158 CC lib/scsi/lun.o 00:01:37.158 CC lib/scsi/dev.o 00:01:37.158 CC lib/nvmf/tcp.o 00:01:37.158 CC lib/nvmf/stubs.o 00:01:37.158 CC lib/scsi/scsi.o 00:01:37.158 CC lib/scsi/port.o 00:01:37.158 CC lib/ublk/ublk.o 00:01:37.158 CC lib/nvmf/mdns_server.o 00:01:37.158 CC lib/ublk/ublk_rpc.o 00:01:37.158 CC lib/nbd/nbd.o 00:01:37.158 CC lib/nvmf/vfio_user.o 00:01:37.158 CC lib/scsi/scsi_bdev.o 00:01:37.158 CC lib/scsi/scsi_pr.o 00:01:37.158 CC lib/nvmf/rdma.o 00:01:37.158 CC lib/scsi/scsi_rpc.o 00:01:37.158 CC lib/nbd/nbd_rpc.o 00:01:37.158 CC lib/nvmf/auth.o 00:01:37.158 CC lib/scsi/task.o 00:01:37.158 CC lib/ftl/ftl_init.o 00:01:37.158 CC lib/ftl/ftl_core.o 00:01:37.158 CC lib/ftl/ftl_layout.o 00:01:37.158 CC lib/ftl/ftl_debug.o 00:01:37.158 CC lib/ftl/ftl_io.o 00:01:37.158 CC lib/ftl/ftl_sb.o 00:01:37.158 CC lib/ftl/ftl_l2p.o 00:01:37.158 CC lib/ftl/ftl_nv_cache.o 00:01:37.158 CC lib/ftl/ftl_l2p_flat.o 00:01:37.158 CC lib/ftl/ftl_band.o 00:01:37.158 CC lib/ftl/ftl_band_ops.o 00:01:37.158 CC lib/ftl/ftl_writer.o 00:01:37.158 CC lib/ftl/ftl_rq.o 00:01:37.158 CC lib/ftl/ftl_reloc.o 00:01:37.158 CC lib/ftl/ftl_l2p_cache.o 00:01:37.158 CC lib/ftl/ftl_p2l.o 00:01:37.158 CC lib/ftl/mngt/ftl_mngt.o 00:01:37.158 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:37.158 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:37.158 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:37.158 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:37.158 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:37.158 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:37.158 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:37.158 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:37.158 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:37.158 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:37.158 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:37.158 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:37.158 CC lib/ftl/utils/ftl_conf.o 00:01:37.158 CC lib/ftl/utils/ftl_md.o 00:01:37.158 CC lib/ftl/utils/ftl_bitmap.o 00:01:37.158 CC lib/ftl/utils/ftl_mempool.o 00:01:37.158 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:37.158 CC lib/ftl/utils/ftl_property.o 00:01:37.158 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:37.158 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:37.158 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:37.158 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:37.158 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:37.158 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:37.158 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:37.158 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:37.158 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:37.158 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:37.158 CC lib/ftl/base/ftl_base_bdev.o 00:01:37.158 CC lib/ftl/base/ftl_base_dev.o 
00:01:37.158 CC lib/ftl/ftl_trace.o 00:01:37.722 LIB libspdk_scsi.a 00:01:37.722 LIB libspdk_nbd.a 00:01:37.979 SO libspdk_nbd.so.7.0 00:01:37.979 SO libspdk_scsi.so.9.0 00:01:37.979 SYMLINK libspdk_nbd.so 00:01:37.979 LIB libspdk_ublk.a 00:01:37.979 SYMLINK libspdk_scsi.so 00:01:37.979 SO libspdk_ublk.so.3.0 00:01:37.979 SYMLINK libspdk_ublk.so 00:01:38.238 CC lib/vhost/vhost.o 00:01:38.238 CC lib/vhost/vhost_rpc.o 00:01:38.238 CC lib/vhost/vhost_scsi.o 00:01:38.238 CC lib/vhost/vhost_blk.o 00:01:38.238 CC lib/vhost/rte_vhost_user.o 00:01:38.238 CC lib/iscsi/conn.o 00:01:38.238 CC lib/iscsi/init_grp.o 00:01:38.238 CC lib/iscsi/iscsi.o 00:01:38.238 CC lib/iscsi/md5.o 00:01:38.238 CC lib/iscsi/param.o 00:01:38.238 CC lib/iscsi/portal_grp.o 00:01:38.238 CC lib/iscsi/tgt_node.o 00:01:38.238 CC lib/iscsi/iscsi_subsystem.o 00:01:38.238 CC lib/iscsi/task.o 00:01:38.238 CC lib/iscsi/iscsi_rpc.o 00:01:38.804 LIB libspdk_blob.a 00:01:38.804 LIB libspdk_ftl.a 00:01:38.804 SO libspdk_blob.so.11.0 00:01:38.804 SO libspdk_ftl.so.9.0 00:01:38.804 SYMLINK libspdk_blob.so 00:01:39.061 CC lib/lvol/lvol.o 00:01:39.061 CC lib/blobfs/blobfs.o 00:01:39.061 CC lib/blobfs/tree.o 00:01:39.320 SYMLINK libspdk_ftl.so 00:01:39.578 LIB libspdk_nvmf.a 00:01:39.578 SO libspdk_nvmf.so.18.1 00:01:39.578 LIB libspdk_iscsi.a 00:01:39.837 SO libspdk_iscsi.so.8.0 00:01:39.837 SYMLINK libspdk_nvmf.so 00:01:39.837 SYMLINK libspdk_iscsi.so 00:01:40.096 LIB libspdk_blobfs.a 00:01:40.096 SO libspdk_blobfs.so.10.0 00:01:40.096 LIB libspdk_lvol.a 00:01:40.096 SYMLINK libspdk_blobfs.so 00:01:40.096 SO libspdk_lvol.so.10.0 00:01:40.096 SYMLINK libspdk_lvol.so 00:01:40.355 LIB libspdk_vhost.a 00:01:40.355 SO libspdk_vhost.so.8.0 00:01:40.355 SYMLINK libspdk_vhost.so 00:01:40.921 CC module/env_dpdk/env_dpdk_rpc.o 00:01:40.921 CC module/vfu_device/vfu_virtio.o 00:01:40.921 CC module/vfu_device/vfu_virtio_blk.o 00:01:40.921 CC module/vfu_device/vfu_virtio_scsi.o 00:01:40.921 CC module/vfu_device/vfu_virtio_rpc.o 00:01:41.179 CC module/sock/posix/posix.o 00:01:41.179 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:41.179 CC module/accel/iaa/accel_iaa.o 00:01:41.179 CC module/accel/ioat/accel_ioat.o 00:01:41.179 CC module/accel/iaa/accel_iaa_rpc.o 00:01:41.179 CC module/accel/ioat/accel_ioat_rpc.o 00:01:41.179 CC module/keyring/file/keyring.o 00:01:41.179 CC module/accel/error/accel_error.o 00:01:41.179 CC module/keyring/file/keyring_rpc.o 00:01:41.179 CC module/accel/error/accel_error_rpc.o 00:01:41.179 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:41.179 CC module/scheduler/gscheduler/gscheduler.o 00:01:41.179 CC module/keyring/linux/keyring.o 00:01:41.179 CC module/keyring/linux/keyring_rpc.o 00:01:41.179 LIB libspdk_env_dpdk_rpc.a 00:01:41.179 CC module/blob/bdev/blob_bdev.o 00:01:41.179 CC module/accel/dsa/accel_dsa.o 00:01:41.179 CC module/accel/dsa/accel_dsa_rpc.o 00:01:41.179 SO libspdk_env_dpdk_rpc.so.6.0 00:01:41.179 SYMLINK libspdk_env_dpdk_rpc.so 00:01:41.438 LIB libspdk_keyring_file.a 00:01:41.438 LIB libspdk_scheduler_gscheduler.a 00:01:41.438 LIB libspdk_keyring_linux.a 00:01:41.438 LIB libspdk_scheduler_dynamic.a 00:01:41.438 LIB libspdk_scheduler_dpdk_governor.a 00:01:41.438 SO libspdk_scheduler_gscheduler.so.4.0 00:01:41.438 LIB libspdk_accel_error.a 00:01:41.438 SO libspdk_keyring_linux.so.1.0 00:01:41.438 SO libspdk_keyring_file.so.1.0 00:01:41.438 LIB libspdk_accel_dsa.a 00:01:41.438 SO libspdk_scheduler_dynamic.so.4.0 00:01:41.438 LIB libspdk_accel_iaa.a 00:01:41.438 SO 
libspdk_scheduler_dpdk_governor.so.4.0 00:01:41.438 LIB libspdk_accel_ioat.a 00:01:41.438 SO libspdk_accel_error.so.2.0 00:01:41.438 SO libspdk_accel_dsa.so.5.0 00:01:41.438 SO libspdk_accel_iaa.so.3.0 00:01:41.438 SYMLINK libspdk_scheduler_gscheduler.so 00:01:41.438 SO libspdk_accel_ioat.so.6.0 00:01:41.438 SYMLINK libspdk_keyring_file.so 00:01:41.438 SYMLINK libspdk_keyring_linux.so 00:01:41.438 SYMLINK libspdk_scheduler_dynamic.so 00:01:41.438 LIB libspdk_blob_bdev.a 00:01:41.438 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:41.438 SYMLINK libspdk_accel_error.so 00:01:41.438 SO libspdk_blob_bdev.so.11.0 00:01:41.438 SYMLINK libspdk_accel_dsa.so 00:01:41.438 SYMLINK libspdk_accel_iaa.so 00:01:41.438 SYMLINK libspdk_accel_ioat.so 00:01:41.438 SYMLINK libspdk_blob_bdev.so 00:01:41.697 LIB libspdk_vfu_device.a 00:01:41.697 SO libspdk_vfu_device.so.3.0 00:01:41.697 SYMLINK libspdk_vfu_device.so 00:01:41.954 LIB libspdk_sock_posix.a 00:01:41.954 SO libspdk_sock_posix.so.6.0 00:01:41.954 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:41.954 CC module/bdev/lvol/vbdev_lvol.o 00:01:41.954 CC module/blobfs/bdev/blobfs_bdev.o 00:01:41.954 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:41.954 CC module/bdev/gpt/gpt.o 00:01:41.954 CC module/bdev/gpt/vbdev_gpt.o 00:01:41.954 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:41.954 CC module/bdev/delay/vbdev_delay.o 00:01:41.954 CC module/bdev/error/vbdev_error.o 00:01:41.954 CC module/bdev/error/vbdev_error_rpc.o 00:01:41.954 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:41.954 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:41.954 CC module/bdev/nvme/bdev_nvme.o 00:01:41.954 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:41.954 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:41.954 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:41.954 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:41.954 CC module/bdev/null/bdev_null.o 00:01:41.954 CC module/bdev/nvme/nvme_rpc.o 00:01:41.954 CC module/bdev/null/bdev_null_rpc.o 00:01:41.954 CC module/bdev/malloc/bdev_malloc.o 00:01:41.954 CC module/bdev/nvme/bdev_mdns_client.o 00:01:41.954 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:41.954 CC module/bdev/nvme/vbdev_opal.o 00:01:41.954 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:41.954 CC module/bdev/passthru/vbdev_passthru.o 00:01:41.954 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:41.954 CC module/bdev/ftl/bdev_ftl.o 00:01:41.954 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:41.954 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:41.954 CC module/bdev/aio/bdev_aio.o 00:01:41.954 CC module/bdev/aio/bdev_aio_rpc.o 00:01:41.954 CC module/bdev/split/vbdev_split.o 00:01:41.954 CC module/bdev/split/vbdev_split_rpc.o 00:01:41.954 CC module/bdev/raid/bdev_raid.o 00:01:41.954 CC module/bdev/raid/bdev_raid_rpc.o 00:01:41.954 CC module/bdev/raid/bdev_raid_sb.o 00:01:41.954 CC module/bdev/raid/raid0.o 00:01:41.954 CC module/bdev/raid/raid1.o 00:01:41.954 CC module/bdev/raid/concat.o 00:01:41.954 CC module/bdev/iscsi/bdev_iscsi.o 00:01:41.954 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:42.212 SYMLINK libspdk_sock_posix.so 00:01:42.469 LIB libspdk_bdev_ftl.a 00:01:42.469 LIB libspdk_bdev_gpt.a 00:01:42.469 LIB libspdk_bdev_error.a 00:01:42.469 SO libspdk_bdev_ftl.so.6.0 00:01:42.469 LIB libspdk_bdev_split.a 00:01:42.469 LIB libspdk_bdev_null.a 00:01:42.469 LIB libspdk_bdev_passthru.a 00:01:42.469 SO libspdk_bdev_gpt.so.6.0 00:01:42.469 SO libspdk_bdev_error.so.6.0 00:01:42.469 LIB libspdk_blobfs_bdev.a 00:01:42.469 SO libspdk_bdev_split.so.6.0 00:01:42.469 SO libspdk_bdev_null.so.6.0 
00:01:42.469 SO libspdk_bdev_passthru.so.6.0 00:01:42.469 LIB libspdk_bdev_aio.a 00:01:42.469 LIB libspdk_bdev_zone_block.a 00:01:42.469 SO libspdk_blobfs_bdev.so.6.0 00:01:42.469 LIB libspdk_bdev_delay.a 00:01:42.469 SYMLINK libspdk_bdev_ftl.so 00:01:42.469 SYMLINK libspdk_bdev_error.so 00:01:42.469 SYMLINK libspdk_bdev_gpt.so 00:01:42.469 LIB libspdk_bdev_iscsi.a 00:01:42.469 SO libspdk_bdev_zone_block.so.6.0 00:01:42.469 SO libspdk_bdev_aio.so.6.0 00:01:42.469 SYMLINK libspdk_bdev_null.so 00:01:42.469 SO libspdk_bdev_delay.so.6.0 00:01:42.469 LIB libspdk_bdev_malloc.a 00:01:42.469 SYMLINK libspdk_bdev_passthru.so 00:01:42.469 SYMLINK libspdk_blobfs_bdev.so 00:01:42.469 SO libspdk_bdev_iscsi.so.6.0 00:01:42.469 SO libspdk_bdev_malloc.so.6.0 00:01:42.469 SYMLINK libspdk_bdev_split.so 00:01:42.469 SYMLINK libspdk_bdev_zone_block.so 00:01:42.727 SYMLINK libspdk_bdev_aio.so 00:01:42.727 SYMLINK libspdk_bdev_delay.so 00:01:42.727 SYMLINK libspdk_bdev_iscsi.so 00:01:42.727 LIB libspdk_bdev_lvol.a 00:01:42.727 SO libspdk_bdev_lvol.so.6.0 00:01:42.727 LIB libspdk_bdev_virtio.a 00:01:42.727 SYMLINK libspdk_bdev_malloc.so 00:01:42.727 SO libspdk_bdev_virtio.so.6.0 00:01:42.727 SYMLINK libspdk_bdev_lvol.so 00:01:42.727 SYMLINK libspdk_bdev_virtio.so 00:01:42.985 LIB libspdk_bdev_raid.a 00:01:43.332 SO libspdk_bdev_raid.so.6.0 00:01:43.332 SYMLINK libspdk_bdev_raid.so 00:01:44.267 LIB libspdk_bdev_nvme.a 00:01:44.526 SO libspdk_bdev_nvme.so.7.0 00:01:44.526 SYMLINK libspdk_bdev_nvme.so 00:01:45.094 CC module/event/subsystems/vmd/vmd.o 00:01:45.094 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:45.094 CC module/event/subsystems/iobuf/iobuf.o 00:01:45.094 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:45.094 CC module/event/subsystems/sock/sock.o 00:01:45.094 CC module/event/subsystems/scheduler/scheduler.o 00:01:45.094 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:45.094 CC module/event/subsystems/keyring/keyring.o 00:01:45.094 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:45.351 LIB libspdk_event_vmd.a 00:01:45.351 LIB libspdk_event_keyring.a 00:01:45.351 LIB libspdk_event_iobuf.a 00:01:45.351 LIB libspdk_event_vhost_blk.a 00:01:45.351 LIB libspdk_event_scheduler.a 00:01:45.351 LIB libspdk_event_vfu_tgt.a 00:01:45.351 SO libspdk_event_vmd.so.6.0 00:01:45.351 SO libspdk_event_keyring.so.1.0 00:01:45.351 SO libspdk_event_iobuf.so.3.0 00:01:45.351 SO libspdk_event_vhost_blk.so.3.0 00:01:45.351 SO libspdk_event_scheduler.so.4.0 00:01:45.351 SO libspdk_event_vfu_tgt.so.3.0 00:01:45.351 SYMLINK libspdk_event_vmd.so 00:01:45.351 SYMLINK libspdk_event_keyring.so 00:01:45.609 SYMLINK libspdk_event_vhost_blk.so 00:01:45.609 SYMLINK libspdk_event_scheduler.so 00:01:45.609 SYMLINK libspdk_event_iobuf.so 00:01:45.609 SYMLINK libspdk_event_vfu_tgt.so 00:01:45.609 LIB libspdk_event_sock.a 00:01:45.609 SO libspdk_event_sock.so.5.0 00:01:45.609 SYMLINK libspdk_event_sock.so 00:01:45.868 CC module/event/subsystems/accel/accel.o 00:01:45.868 LIB libspdk_event_accel.a 00:01:46.127 SO libspdk_event_accel.so.6.0 00:01:46.127 SYMLINK libspdk_event_accel.so 00:01:46.386 CC module/event/subsystems/bdev/bdev.o 00:01:46.645 LIB libspdk_event_bdev.a 00:01:46.645 SO libspdk_event_bdev.so.6.0 00:01:46.645 SYMLINK libspdk_event_bdev.so 00:01:46.920 CC module/event/subsystems/nbd/nbd.o 00:01:46.920 CC module/event/subsystems/ublk/ublk.o 00:01:46.920 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:46.920 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:46.920 CC module/event/subsystems/scsi/scsi.o 00:01:47.178 
LIB libspdk_event_nbd.a 00:01:47.178 LIB libspdk_event_ublk.a 00:01:47.178 LIB libspdk_event_scsi.a 00:01:47.178 SO libspdk_event_ublk.so.3.0 00:01:47.178 SO libspdk_event_nbd.so.6.0 00:01:47.178 SO libspdk_event_scsi.so.6.0 00:01:47.178 LIB libspdk_event_nvmf.a 00:01:47.178 SYMLINK libspdk_event_ublk.so 00:01:47.178 SYMLINK libspdk_event_nbd.so 00:01:47.437 SYMLINK libspdk_event_scsi.so 00:01:47.437 SO libspdk_event_nvmf.so.6.0 00:01:47.437 SYMLINK libspdk_event_nvmf.so 00:01:47.696 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:47.696 CC module/event/subsystems/iscsi/iscsi.o 00:01:47.696 LIB libspdk_event_vhost_scsi.a 00:01:47.696 SO libspdk_event_vhost_scsi.so.3.0 00:01:47.696 LIB libspdk_event_iscsi.a 00:01:47.955 SYMLINK libspdk_event_vhost_scsi.so 00:01:47.955 SO libspdk_event_iscsi.so.6.0 00:01:47.955 SYMLINK libspdk_event_iscsi.so 00:01:48.214 SO libspdk.so.6.0 00:01:48.214 SYMLINK libspdk.so 00:01:48.476 TEST_HEADER include/spdk/accel.h 00:01:48.476 CXX app/trace/trace.o 00:01:48.476 TEST_HEADER include/spdk/accel_module.h 00:01:48.476 CC app/trace_record/trace_record.o 00:01:48.476 TEST_HEADER include/spdk/assert.h 00:01:48.476 TEST_HEADER include/spdk/base64.h 00:01:48.476 TEST_HEADER include/spdk/barrier.h 00:01:48.476 TEST_HEADER include/spdk/bdev.h 00:01:48.476 TEST_HEADER include/spdk/bdev_zone.h 00:01:48.476 TEST_HEADER include/spdk/bdev_module.h 00:01:48.476 TEST_HEADER include/spdk/bit_array.h 00:01:48.476 TEST_HEADER include/spdk/bit_pool.h 00:01:48.476 CC app/spdk_nvme_perf/perf.o 00:01:48.476 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:48.476 CC app/spdk_nvme_identify/identify.o 00:01:48.476 TEST_HEADER include/spdk/blob_bdev.h 00:01:48.476 TEST_HEADER include/spdk/blobfs.h 00:01:48.476 TEST_HEADER include/spdk/blob.h 00:01:48.476 CC app/spdk_top/spdk_top.o 00:01:48.476 TEST_HEADER include/spdk/conf.h 00:01:48.476 CC test/rpc_client/rpc_client_test.o 00:01:48.476 TEST_HEADER include/spdk/config.h 00:01:48.476 TEST_HEADER include/spdk/cpuset.h 00:01:48.476 CC app/spdk_nvme_discover/discovery_aer.o 00:01:48.476 TEST_HEADER include/spdk/crc16.h 00:01:48.476 TEST_HEADER include/spdk/crc32.h 00:01:48.476 TEST_HEADER include/spdk/crc64.h 00:01:48.476 TEST_HEADER include/spdk/dif.h 00:01:48.476 CC app/spdk_lspci/spdk_lspci.o 00:01:48.476 TEST_HEADER include/spdk/dma.h 00:01:48.476 TEST_HEADER include/spdk/env_dpdk.h 00:01:48.476 TEST_HEADER include/spdk/endian.h 00:01:48.476 TEST_HEADER include/spdk/env.h 00:01:48.476 TEST_HEADER include/spdk/event.h 00:01:48.476 TEST_HEADER include/spdk/fd_group.h 00:01:48.476 TEST_HEADER include/spdk/fd.h 00:01:48.476 TEST_HEADER include/spdk/ftl.h 00:01:48.476 TEST_HEADER include/spdk/file.h 00:01:48.476 TEST_HEADER include/spdk/gpt_spec.h 00:01:48.476 TEST_HEADER include/spdk/hexlify.h 00:01:48.476 TEST_HEADER include/spdk/histogram_data.h 00:01:48.476 TEST_HEADER include/spdk/idxd.h 00:01:48.476 TEST_HEADER include/spdk/idxd_spec.h 00:01:48.476 TEST_HEADER include/spdk/init.h 00:01:48.476 TEST_HEADER include/spdk/ioat.h 00:01:48.476 TEST_HEADER include/spdk/ioat_spec.h 00:01:48.476 TEST_HEADER include/spdk/iscsi_spec.h 00:01:48.476 TEST_HEADER include/spdk/json.h 00:01:48.476 TEST_HEADER include/spdk/jsonrpc.h 00:01:48.476 TEST_HEADER include/spdk/keyring.h 00:01:48.476 TEST_HEADER include/spdk/keyring_module.h 00:01:48.476 TEST_HEADER include/spdk/likely.h 00:01:48.476 TEST_HEADER include/spdk/log.h 00:01:48.476 TEST_HEADER include/spdk/lvol.h 00:01:48.476 TEST_HEADER include/spdk/mmio.h 00:01:48.476 TEST_HEADER 
include/spdk/memory.h 00:01:48.476 TEST_HEADER include/spdk/nbd.h 00:01:48.476 TEST_HEADER include/spdk/notify.h 00:01:48.476 TEST_HEADER include/spdk/nvme.h 00:01:48.476 TEST_HEADER include/spdk/nvme_intel.h 00:01:48.476 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:48.476 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:48.476 TEST_HEADER include/spdk/nvme_spec.h 00:01:48.476 TEST_HEADER include/spdk/nvme_zns.h 00:01:48.476 TEST_HEADER include/spdk/nvmf.h 00:01:48.476 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:48.476 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:48.476 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:48.476 TEST_HEADER include/spdk/nvmf_spec.h 00:01:48.476 CC app/iscsi_tgt/iscsi_tgt.o 00:01:48.476 TEST_HEADER include/spdk/opal.h 00:01:48.476 CC app/spdk_dd/spdk_dd.o 00:01:48.476 TEST_HEADER include/spdk/nvmf_transport.h 00:01:48.476 TEST_HEADER include/spdk/opal_spec.h 00:01:48.476 TEST_HEADER include/spdk/pci_ids.h 00:01:48.476 TEST_HEADER include/spdk/pipe.h 00:01:48.476 TEST_HEADER include/spdk/queue.h 00:01:48.476 TEST_HEADER include/spdk/reduce.h 00:01:48.476 TEST_HEADER include/spdk/scheduler.h 00:01:48.476 TEST_HEADER include/spdk/rpc.h 00:01:48.476 TEST_HEADER include/spdk/scsi.h 00:01:48.476 TEST_HEADER include/spdk/scsi_spec.h 00:01:48.476 CC app/nvmf_tgt/nvmf_main.o 00:01:48.476 TEST_HEADER include/spdk/sock.h 00:01:48.476 TEST_HEADER include/spdk/stdinc.h 00:01:48.476 TEST_HEADER include/spdk/string.h 00:01:48.476 TEST_HEADER include/spdk/thread.h 00:01:48.476 TEST_HEADER include/spdk/trace_parser.h 00:01:48.476 TEST_HEADER include/spdk/trace.h 00:01:48.476 TEST_HEADER include/spdk/tree.h 00:01:48.476 TEST_HEADER include/spdk/ublk.h 00:01:48.476 TEST_HEADER include/spdk/util.h 00:01:48.476 TEST_HEADER include/spdk/version.h 00:01:48.476 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:48.476 TEST_HEADER include/spdk/uuid.h 00:01:48.476 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:48.476 TEST_HEADER include/spdk/vhost.h 00:01:48.476 TEST_HEADER include/spdk/vmd.h 00:01:48.476 TEST_HEADER include/spdk/xor.h 00:01:48.476 TEST_HEADER include/spdk/zipf.h 00:01:48.476 CXX test/cpp_headers/accel_module.o 00:01:48.476 CXX test/cpp_headers/accel.o 00:01:48.476 CXX test/cpp_headers/assert.o 00:01:48.476 CXX test/cpp_headers/barrier.o 00:01:48.476 CXX test/cpp_headers/bdev.o 00:01:48.476 CXX test/cpp_headers/base64.o 00:01:48.476 CXX test/cpp_headers/bdev_zone.o 00:01:48.476 CXX test/cpp_headers/bdev_module.o 00:01:48.476 CXX test/cpp_headers/bit_array.o 00:01:48.476 CXX test/cpp_headers/bit_pool.o 00:01:48.476 CXX test/cpp_headers/blob_bdev.o 00:01:48.476 CXX test/cpp_headers/blob.o 00:01:48.476 CXX test/cpp_headers/blobfs.o 00:01:48.476 CXX test/cpp_headers/blobfs_bdev.o 00:01:48.476 CXX test/cpp_headers/config.o 00:01:48.476 CXX test/cpp_headers/conf.o 00:01:48.476 CXX test/cpp_headers/cpuset.o 00:01:48.476 CXX test/cpp_headers/crc32.o 00:01:48.476 CC app/spdk_tgt/spdk_tgt.o 00:01:48.476 CXX test/cpp_headers/crc16.o 00:01:48.476 CXX test/cpp_headers/dif.o 00:01:48.476 CXX test/cpp_headers/endian.o 00:01:48.476 CXX test/cpp_headers/crc64.o 00:01:48.476 CXX test/cpp_headers/dma.o 00:01:48.476 CXX test/cpp_headers/env.o 00:01:48.476 CXX test/cpp_headers/env_dpdk.o 00:01:48.476 CXX test/cpp_headers/event.o 00:01:48.476 CXX test/cpp_headers/fd_group.o 00:01:48.476 CXX test/cpp_headers/fd.o 00:01:48.476 CXX test/cpp_headers/file.o 00:01:48.476 CXX test/cpp_headers/gpt_spec.o 00:01:48.476 CXX test/cpp_headers/ftl.o 00:01:48.476 CXX test/cpp_headers/hexlify.o 00:01:48.476 CXX 
test/cpp_headers/histogram_data.o 00:01:48.476 CXX test/cpp_headers/idxd.o 00:01:48.476 CXX test/cpp_headers/init.o 00:01:48.476 CXX test/cpp_headers/ioat.o 00:01:48.476 CXX test/cpp_headers/idxd_spec.o 00:01:48.476 CXX test/cpp_headers/ioat_spec.o 00:01:48.476 CXX test/cpp_headers/iscsi_spec.o 00:01:48.476 CXX test/cpp_headers/json.o 00:01:48.476 CXX test/cpp_headers/jsonrpc.o 00:01:48.476 CXX test/cpp_headers/keyring_module.o 00:01:48.476 CXX test/cpp_headers/keyring.o 00:01:48.476 CXX test/cpp_headers/log.o 00:01:48.476 CXX test/cpp_headers/likely.o 00:01:48.476 CXX test/cpp_headers/lvol.o 00:01:48.476 CXX test/cpp_headers/memory.o 00:01:48.476 CXX test/cpp_headers/nbd.o 00:01:48.477 CXX test/cpp_headers/nvme_intel.o 00:01:48.477 CXX test/cpp_headers/notify.o 00:01:48.477 CXX test/cpp_headers/mmio.o 00:01:48.477 CXX test/cpp_headers/nvme.o 00:01:48.477 CXX test/cpp_headers/nvme_ocssd.o 00:01:48.477 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:48.477 CXX test/cpp_headers/nvme_zns.o 00:01:48.477 CXX test/cpp_headers/nvme_spec.o 00:01:48.477 CXX test/cpp_headers/nvmf_cmd.o 00:01:48.477 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:48.751 CXX test/cpp_headers/nvmf.o 00:01:48.751 CXX test/cpp_headers/nvmf_spec.o 00:01:48.751 CXX test/cpp_headers/nvmf_transport.o 00:01:48.751 CXX test/cpp_headers/opal.o 00:01:48.751 CXX test/cpp_headers/opal_spec.o 00:01:48.751 CXX test/cpp_headers/pipe.o 00:01:48.751 CXX test/cpp_headers/pci_ids.o 00:01:48.751 CXX test/cpp_headers/queue.o 00:01:48.751 CXX test/cpp_headers/reduce.o 00:01:48.751 CXX test/cpp_headers/rpc.o 00:01:48.751 CXX test/cpp_headers/scsi.o 00:01:48.751 CXX test/cpp_headers/scheduler.o 00:01:48.751 CXX test/cpp_headers/scsi_spec.o 00:01:48.751 CXX test/cpp_headers/sock.o 00:01:48.751 CXX test/cpp_headers/stdinc.o 00:01:48.751 CXX test/cpp_headers/thread.o 00:01:48.751 CXX test/cpp_headers/string.o 00:01:48.751 CXX test/cpp_headers/trace_parser.o 00:01:48.751 CXX test/cpp_headers/trace.o 00:01:48.751 CXX test/cpp_headers/tree.o 00:01:48.751 CXX test/cpp_headers/ublk.o 00:01:48.751 CXX test/cpp_headers/uuid.o 00:01:48.751 CXX test/cpp_headers/util.o 00:01:48.751 CXX test/cpp_headers/version.o 00:01:48.751 CXX test/cpp_headers/vfio_user_pci.o 00:01:48.751 CC test/env/memory/memory_ut.o 00:01:48.751 CC test/thread/poller_perf/poller_perf.o 00:01:48.751 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:48.751 CXX test/cpp_headers/vfio_user_spec.o 00:01:48.751 CC test/env/vtophys/vtophys.o 00:01:48.751 CC test/app/jsoncat/jsoncat.o 00:01:48.751 CC examples/ioat/perf/perf.o 00:01:48.751 CC test/app/histogram_perf/histogram_perf.o 00:01:48.751 CC examples/ioat/verify/verify.o 00:01:48.751 CC examples/util/zipf/zipf.o 00:01:48.751 CC app/fio/nvme/fio_plugin.o 00:01:48.751 CC test/env/pci/pci_ut.o 00:01:48.751 CC test/dma/test_dma/test_dma.o 00:01:48.751 CXX test/cpp_headers/vhost.o 00:01:48.751 CC test/app/stub/stub.o 00:01:49.039 CC app/fio/bdev/fio_plugin.o 00:01:49.039 CC test/app/bdev_svc/bdev_svc.o 00:01:49.039 LINK spdk_lspci 00:01:49.305 LINK spdk_nvme_discover 00:01:49.305 LINK rpc_client_test 00:01:49.305 LINK nvmf_tgt 00:01:49.305 LINK spdk_trace_record 00:01:49.305 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:49.305 CC test/env/mem_callbacks/mem_callbacks.o 00:01:49.305 LINK vtophys 00:01:49.305 CXX test/cpp_headers/vmd.o 00:01:49.305 LINK poller_perf 00:01:49.305 CXX test/cpp_headers/xor.o 00:01:49.305 CXX test/cpp_headers/zipf.o 00:01:49.305 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:49.305 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:49.563 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:49.563 LINK interrupt_tgt 00:01:49.563 LINK spdk_tgt 00:01:49.563 LINK bdev_svc 00:01:49.563 LINK verify 00:01:49.563 LINK jsoncat 00:01:49.563 LINK iscsi_tgt 00:01:49.563 LINK ioat_perf 00:01:49.563 LINK env_dpdk_post_init 00:01:49.563 LINK histogram_perf 00:01:49.563 LINK spdk_trace 00:01:49.563 LINK stub 00:01:49.563 LINK zipf 00:01:49.820 LINK spdk_dd 00:01:49.820 LINK pci_ut 00:01:49.820 LINK spdk_bdev 00:01:49.820 LINK vhost_fuzz 00:01:50.077 CC test/event/reactor_perf/reactor_perf.o 00:01:50.077 CC test/event/reactor/reactor.o 00:01:50.077 CC test/event/event_perf/event_perf.o 00:01:50.077 CC test/event/app_repeat/app_repeat.o 00:01:50.077 CC app/vhost/vhost.o 00:01:50.077 LINK nvme_fuzz 00:01:50.077 CC test/event/scheduler/scheduler.o 00:01:50.077 CC examples/vmd/lsvmd/lsvmd.o 00:01:50.077 CC examples/idxd/perf/perf.o 00:01:50.077 LINK spdk_nvme_perf 00:01:50.077 CC examples/vmd/led/led.o 00:01:50.077 CC examples/sock/hello_world/hello_sock.o 00:01:50.077 LINK test_dma 00:01:50.077 LINK mem_callbacks 00:01:50.077 CC examples/thread/thread/thread_ex.o 00:01:50.077 LINK spdk_top 00:01:50.077 LINK reactor 00:01:50.077 LINK reactor_perf 00:01:50.077 LINK event_perf 00:01:50.336 LINK app_repeat 00:01:50.336 LINK spdk_nvme_identify 00:01:50.336 LINK vhost 00:01:50.336 LINK led 00:01:50.336 LINK lsvmd 00:01:50.336 LINK hello_sock 00:01:50.336 LINK spdk_nvme 00:01:50.336 LINK thread 00:01:50.336 LINK idxd_perf 00:01:50.594 LINK memory_ut 00:01:50.594 LINK scheduler 00:01:50.594 CC test/nvme/aer/aer.o 00:01:50.594 CC test/nvme/err_injection/err_injection.o 00:01:50.594 CC test/nvme/reserve/reserve.o 00:01:50.594 CC test/nvme/reset/reset.o 00:01:50.594 CC test/nvme/boot_partition/boot_partition.o 00:01:50.594 CC test/nvme/simple_copy/simple_copy.o 00:01:50.594 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:50.594 CC test/nvme/overhead/overhead.o 00:01:50.594 CC test/nvme/e2edp/nvme_dp.o 00:01:50.594 CC test/nvme/cuse/cuse.o 00:01:50.594 CC test/nvme/connect_stress/connect_stress.o 00:01:50.594 CC test/nvme/startup/startup.o 00:01:50.594 CC test/nvme/sgl/sgl.o 00:01:50.594 CC test/nvme/compliance/nvme_compliance.o 00:01:50.594 CC test/nvme/fdp/fdp.o 00:01:50.594 CC test/blobfs/mkfs/mkfs.o 00:01:50.594 CC test/nvme/fused_ordering/fused_ordering.o 00:01:50.594 CC test/accel/dif/dif.o 00:01:50.852 CC test/lvol/esnap/esnap.o 00:01:50.852 LINK doorbell_aers 00:01:50.852 LINK boot_partition 00:01:50.852 CC examples/nvme/reconnect/reconnect.o 00:01:50.852 LINK startup 00:01:50.852 CC examples/nvme/arbitration/arbitration.o 00:01:50.852 LINK err_injection 00:01:50.852 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:50.852 CC examples/nvme/hotplug/hotplug.o 00:01:50.852 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:50.852 CC examples/nvme/abort/abort.o 00:01:50.852 CC examples/nvme/hello_world/hello_world.o 00:01:50.852 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:50.852 LINK connect_stress 00:01:50.852 LINK reserve 00:01:50.852 LINK fused_ordering 00:01:50.852 LINK mkfs 00:01:50.852 LINK aer 00:01:50.852 LINK reset 00:01:50.852 LINK overhead 00:01:50.852 LINK nvme_dp 00:01:50.852 LINK sgl 00:01:51.110 CC examples/accel/perf/accel_perf.o 00:01:51.110 LINK nvme_compliance 00:01:51.110 LINK fdp 00:01:51.110 CC examples/blob/cli/blobcli.o 00:01:51.110 CC examples/blob/hello_world/hello_blob.o 00:01:51.110 LINK pmr_persistence 00:01:51.110 LINK iscsi_fuzz 00:01:51.110 LINK simple_copy 00:01:51.110 
LINK cmb_copy 00:01:51.110 LINK hotplug 00:01:51.110 LINK hello_world 00:01:51.110 LINK dif 00:01:51.110 LINK abort 00:01:51.110 LINK arbitration 00:01:51.110 LINK reconnect 00:01:51.368 LINK nvme_manage 00:01:51.368 LINK hello_blob 00:01:51.625 LINK accel_perf 00:01:51.625 LINK blobcli 00:01:51.625 CC test/bdev/bdevio/bdevio.o 00:01:51.883 LINK cuse 00:01:52.141 CC examples/bdev/hello_world/hello_bdev.o 00:01:52.141 CC examples/bdev/bdevperf/bdevperf.o 00:01:52.141 LINK bdevio 00:01:52.398 LINK hello_bdev 00:01:52.964 LINK bdevperf 00:01:53.531 CC examples/nvmf/nvmf/nvmf.o 00:01:54.098 LINK nvmf 00:01:56.002 LINK esnap 00:01:56.262 00:01:56.262 real 0m54.431s 00:01:56.262 user 8m32.203s 00:01:56.262 sys 4m17.761s 00:01:56.262 11:17:30 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:56.262 11:17:30 make -- common/autotest_common.sh@10 -- $ set +x 00:01:56.262 ************************************ 00:01:56.262 END TEST make 00:01:56.262 ************************************ 00:01:56.262 11:17:30 -- common/autotest_common.sh@1142 -- $ return 0 00:01:56.262 11:17:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:56.262 11:17:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:56.262 11:17:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:56.262 11:17:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.262 11:17:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:56.262 11:17:30 -- pm/common@44 -- $ pid=2469666 00:01:56.262 11:17:30 -- pm/common@50 -- $ kill -TERM 2469666 00:01:56.262 11:17:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.262 11:17:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:56.262 11:17:30 -- pm/common@44 -- $ pid=2469667 00:01:56.262 11:17:30 -- pm/common@50 -- $ kill -TERM 2469667 00:01:56.262 11:17:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.262 11:17:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:56.262 11:17:30 -- pm/common@44 -- $ pid=2469669 00:01:56.262 11:17:30 -- pm/common@50 -- $ kill -TERM 2469669 00:01:56.262 11:17:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.262 11:17:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:56.262 11:17:30 -- pm/common@44 -- $ pid=2469695 00:01:56.262 11:17:30 -- pm/common@50 -- $ sudo -E kill -TERM 2469695 00:01:56.521 11:17:30 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:56.521 11:17:30 -- nvmf/common.sh@7 -- # uname -s 00:01:56.521 11:17:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:56.521 11:17:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:56.521 11:17:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:56.521 11:17:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:56.521 11:17:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:56.521 11:17:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:56.521 11:17:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:56.521 11:17:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:56.521 11:17:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:56.521 11:17:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:01:56.521 11:17:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:01:56.521 11:17:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:01:56.521 11:17:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:56.521 11:17:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:56.521 11:17:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:56.521 11:17:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:56.521 11:17:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:56.521 11:17:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:56.521 11:17:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:56.521 11:17:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:56.521 11:17:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.521 11:17:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.521 11:17:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.521 11:17:30 -- paths/export.sh@5 -- # export PATH 00:01:56.521 11:17:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.521 11:17:30 -- nvmf/common.sh@47 -- # : 0 00:01:56.521 11:17:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:56.521 11:17:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:56.521 11:17:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:56.521 11:17:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:56.522 11:17:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:56.522 11:17:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:56.522 11:17:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:56.522 11:17:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:56.522 11:17:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:56.522 11:17:30 -- spdk/autotest.sh@32 -- # uname -s 00:01:56.522 11:17:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:56.522 11:17:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:56.522 11:17:30 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:56.522 11:17:30 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:56.522 11:17:30 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:56.522 11:17:30 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:56.522 11:17:30 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:56.522 11:17:30 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:56.522 11:17:30 -- spdk/autotest.sh@48 -- # udevadm_pid=2532268 00:01:56.522 11:17:30 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:56.522 11:17:30 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:56.522 11:17:30 -- pm/common@17 -- # local monitor 00:01:56.522 11:17:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.522 11:17:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.522 11:17:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.522 11:17:30 -- pm/common@21 -- # date +%s 00:01:56.522 11:17:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.522 11:17:30 -- pm/common@21 -- # date +%s 00:01:56.522 11:17:30 -- pm/common@25 -- # sleep 1 00:01:56.522 11:17:30 -- pm/common@21 -- # date +%s 00:01:56.522 11:17:30 -- pm/common@21 -- # date +%s 00:01:56.522 11:17:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721035050 00:01:56.522 11:17:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721035050 00:01:56.522 11:17:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721035050 00:01:56.522 11:17:30 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721035050 00:01:56.522 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721035050_collect-vmstat.pm.log 00:01:56.522 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721035050_collect-cpu-load.pm.log 00:01:56.522 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721035050_collect-cpu-temp.pm.log 00:01:56.522 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721035050_collect-bmc-pm.bmc.pm.log 00:01:57.458 11:17:31 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:57.458 11:17:31 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:57.458 11:17:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:01:57.458 11:17:31 -- common/autotest_common.sh@10 -- # set +x 00:01:57.458 11:17:31 -- spdk/autotest.sh@59 -- # create_test_list 00:01:57.458 11:17:31 -- common/autotest_common.sh@746 -- # xtrace_disable 00:01:57.458 11:17:31 -- common/autotest_common.sh@10 -- # set +x 00:01:57.458 11:17:31 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:57.458 11:17:31 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:57.458 11:17:31 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
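(The trace above shows autotest.sh saving the previous core_pattern, creating the coredumps output directory, and echoing a pipe to scripts/core-collector.sh with %P %s %t. The redirect target is not visible in this excerpt, but on Linux piping crash dumps to a helper program is done by writing such a '|' pattern into /proc/sys/kernel/core_pattern. A minimal generic sketch follows, to be run as root; the collector path is the one from this workspace, everything else is illustrative.)

#!/usr/bin/env bash
# Route kernel core dumps through a collector script. See core(5) for the % specifiers:
# %P = PID in the initial namespace, %s = signal number, %t = time of dump.
set -euo pipefail
out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
collector=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh
old_core_pattern=$(cat /proc/sys/kernel/core_pattern)     # keep the original so it can be restored
mkdir -p "$out"
echo "|$collector %P %s %t" > /proc/sys/kernel/core_pattern
# ... run the tests; any crash now invokes the collector ...
echo "$old_core_pattern" > /proc/sys/kernel/core_pattern  # restore the previous handler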
00:01:57.458 11:17:31 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:57.458 11:17:31 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:57.458 11:17:31 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:57.458 11:17:31 -- common/autotest_common.sh@1455 -- # uname 00:01:57.458 11:17:31 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:01:57.458 11:17:31 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:57.459 11:17:31 -- common/autotest_common.sh@1475 -- # uname 00:01:57.459 11:17:31 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:01:57.459 11:17:31 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:57.459 11:17:31 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:57.459 11:17:31 -- spdk/autotest.sh@72 -- # hash lcov 00:01:57.459 11:17:31 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:57.459 11:17:31 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:57.459 --rc lcov_branch_coverage=1 00:01:57.459 --rc lcov_function_coverage=1 00:01:57.459 --rc genhtml_branch_coverage=1 00:01:57.459 --rc genhtml_function_coverage=1 00:01:57.459 --rc genhtml_legend=1 00:01:57.459 --rc geninfo_all_blocks=1 00:01:57.459 ' 00:01:57.459 11:17:31 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:57.459 --rc lcov_branch_coverage=1 00:01:57.459 --rc lcov_function_coverage=1 00:01:57.459 --rc genhtml_branch_coverage=1 00:01:57.459 --rc genhtml_function_coverage=1 00:01:57.459 --rc genhtml_legend=1 00:01:57.459 --rc geninfo_all_blocks=1 00:01:57.459 ' 00:01:57.459 11:17:31 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:57.459 --rc lcov_branch_coverage=1 00:01:57.459 --rc lcov_function_coverage=1 00:01:57.459 --rc genhtml_branch_coverage=1 00:01:57.459 --rc genhtml_function_coverage=1 00:01:57.459 --rc genhtml_legend=1 00:01:57.459 --rc geninfo_all_blocks=1 00:01:57.459 --no-external' 00:01:57.459 11:17:31 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:57.459 --rc lcov_branch_coverage=1 00:01:57.459 --rc lcov_function_coverage=1 00:01:57.459 --rc genhtml_branch_coverage=1 00:01:57.459 --rc genhtml_function_coverage=1 00:01:57.459 --rc genhtml_legend=1 00:01:57.459 --rc geninfo_all_blocks=1 00:01:57.459 --no-external' 00:01:57.459 11:17:31 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:57.717 lcov: LCOV version 1.14 00:01:57.717 11:17:31 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:02.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:02.989 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:02.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:02.989 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:02.989 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:02.989 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:02.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:02.989 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:02.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:02.989 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:02.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:02.989 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:02.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:02.989 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:02.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:02.989 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:02.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:02.989 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:02.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:02.989 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:02.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:02.989 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:02.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:02.989 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:02.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:02.989 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:02.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:02.989 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:02.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:02.989 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:02.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:02.989 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:02.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:02.989 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:02.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:02.989 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:02.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:02.989 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:02.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:02.989 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:02.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:02.989 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:02.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:02.989 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:02.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:02.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:02.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:02.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:02.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:02.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:02.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:02.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:02.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:02.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:02.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:02.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:02.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:02.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:02.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:02.990 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:02.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:02.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:02.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:02.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:02.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:02.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:02.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:02.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:02.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:02.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:02.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:02.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:02.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:02.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:02.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:02.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:02.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:02.990 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:03.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:03.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:03.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:03.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:03.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:03.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:03.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:03.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:03.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:03.251 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:03.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:03.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:03.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:03.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:03.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:03.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:03.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:03.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:03.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:03.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:03.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:03.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:03.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:03.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:03.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:03.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:03.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:03.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:03.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:03.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:03.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:03.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:03.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:03.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:03.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:03.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:03.251 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:03.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:03.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:03.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:03.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:03.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:03.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:03.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:03.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:03.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:03.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:03.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:03.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:03.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:03.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:03.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:03.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:03.252 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:03.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:03.252 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:03.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:03.252 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:03.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:03.252 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:03.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:03.252 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:03.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:03.252 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:03.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:03.252 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:03.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:03.252 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:03.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:03.252 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:03.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:03.252 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:03.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:03.252 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:03.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:03.252 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:03.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:03.252 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:03.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:03.252 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:03.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:03.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:03.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:03.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:03.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:03.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:03.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:03.512 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:03.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:03.512 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:03.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:03.512 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:03.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:03.512 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:03.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:03.512 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:03.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:03.512 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:30.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:30.070 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:40.050 11:18:13 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:40.050 11:18:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:40.050 11:18:13 -- common/autotest_common.sh@10 -- # set +x 00:02:40.050 11:18:13 -- spdk/autotest.sh@91 -- # rm -f 00:02:40.050 11:18:13 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:41.955 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:02:41.955 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:41.955 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:41.955 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:41.955 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:41.955 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:41.955 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:41.955 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:41.955 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:41.955 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:41.955 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:41.955 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:41.955 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:41.955 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:41.955 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:41.955 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:41.955 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:41.955 11:18:16 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:41.955 11:18:16 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:41.955 11:18:16 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:41.955 11:18:16 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:41.955 11:18:16 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:41.955 11:18:16 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:41.955 11:18:16 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:41.955 11:18:16 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:41.955 11:18:16 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:41.955 11:18:16 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:41.955 11:18:16 -- 
spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:41.955 11:18:16 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:41.955 11:18:16 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:41.955 11:18:16 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:41.955 11:18:16 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:42.215 No valid GPT data, bailing 00:02:42.215 11:18:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:42.215 11:18:16 -- scripts/common.sh@391 -- # pt= 00:02:42.215 11:18:16 -- scripts/common.sh@392 -- # return 1 00:02:42.215 11:18:16 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:42.215 1+0 records in 00:02:42.215 1+0 records out 00:02:42.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00597142 s, 176 MB/s 00:02:42.215 11:18:16 -- spdk/autotest.sh@118 -- # sync 00:02:42.215 11:18:16 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:42.215 11:18:16 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:42.215 11:18:16 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:48.784 11:18:22 -- spdk/autotest.sh@124 -- # uname -s 00:02:48.784 11:18:22 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:48.784 11:18:22 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:48.784 11:18:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:48.784 11:18:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:48.784 11:18:22 -- common/autotest_common.sh@10 -- # set +x 00:02:48.784 ************************************ 00:02:48.784 START TEST setup.sh 00:02:48.784 ************************************ 00:02:48.784 11:18:22 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:48.784 * Looking for test storage... 00:02:48.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:48.784 11:18:22 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:48.784 11:18:22 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:48.784 11:18:22 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:48.784 11:18:22 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:48.784 11:18:22 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:48.784 11:18:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:48.784 ************************************ 00:02:48.784 START TEST acl 00:02:48.784 ************************************ 00:02:48.784 11:18:22 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:48.784 * Looking for test storage... 
00:02:48.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:48.784 11:18:22 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:48.784 11:18:22 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:48.784 11:18:22 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:48.784 11:18:22 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:48.784 11:18:22 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:48.784 11:18:22 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:48.784 11:18:22 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:48.784 11:18:22 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:48.784 11:18:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:48.784 11:18:22 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:48.784 11:18:22 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:48.784 11:18:22 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:48.784 11:18:22 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:48.784 11:18:22 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:48.784 11:18:22 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:48.784 11:18:22 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:51.319 11:18:25 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:51.319 11:18:25 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:51.319 11:18:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.319 11:18:25 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:51.319 11:18:25 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:51.319 11:18:25 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:54.609 Hugepages 00:02:54.609 node hugesize free / total 00:02:54.609 11:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.610 00:02:54.610 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.610 11:18:28 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:86:00.0 == *:*:*.* ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\6\:\0\0\.\0* ]] 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:54.610 11:18:28 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:54.610 11:18:28 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:54.610 11:18:28 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:54.610 11:18:28 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:54.610 ************************************ 00:02:54.610 START TEST denied 00:02:54.610 ************************************ 00:02:54.610 11:18:28 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:54.610 11:18:28 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:86:00.0' 00:02:54.610 11:18:28 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:54.610 11:18:28 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:86:00.0' 00:02:54.610 11:18:28 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:54.610 11:18:28 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:57.907 0000:86:00.0 (8086 0a54): Skipping denied controller at 0000:86:00.0 00:02:57.907 11:18:31 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:86:00.0 00:02:57.907 11:18:31 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:57.907 11:18:31 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:57.907 11:18:31 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:86:00.0 ]] 00:02:57.907 11:18:31 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:86:00.0/driver 00:02:57.907 11:18:31 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:57.907 11:18:31 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:57.907 11:18:31 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:57.907 11:18:31 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:57.907 11:18:31 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:02.105 00:03:02.105 real 0m7.195s 00:03:02.105 user 0m2.375s 00:03:02.105 sys 0m4.079s 00:03:02.105 11:18:35 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:02.105 11:18:35 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:02.105 ************************************ 00:03:02.105 END TEST denied 00:03:02.105 ************************************ 00:03:02.105 11:18:35 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:02.105 11:18:35 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:02.105 11:18:35 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:02.105 11:18:35 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:02.105 11:18:35 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:02.105 ************************************ 00:03:02.105 START TEST allowed 00:03:02.105 ************************************ 00:03:02.105 11:18:36 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:02.105 11:18:36 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:86:00.0 00:03:02.105 11:18:36 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:02.105 11:18:36 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:86:00.0 .*: nvme -> .*' 00:03:02.105 11:18:36 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:02.105 11:18:36 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:06.456 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:03:06.456 11:18:39 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:06.456 11:18:39 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:06.456 11:18:39 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:06.456 11:18:39 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:06.456 11:18:39 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.120 00:03:09.120 real 0m7.226s 00:03:09.120 user 0m2.194s 00:03:09.120 sys 0m4.110s 00:03:09.120 11:18:43 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:09.120 11:18:43 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:09.120 ************************************ 00:03:09.120 END TEST allowed 00:03:09.120 ************************************ 00:03:09.120 11:18:43 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:09.120 00:03:09.120 real 0m20.788s 00:03:09.120 user 0m6.908s 00:03:09.120 sys 0m12.426s 00:03:09.120 11:18:43 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:09.120 11:18:43 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:09.120 ************************************ 00:03:09.120 END TEST acl 00:03:09.120 ************************************ 00:03:09.120 11:18:43 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:09.120 11:18:43 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:09.120 11:18:43 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:09.120 11:18:43 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.120 11:18:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:09.120 ************************************ 00:03:09.120 START TEST hugepages 00:03:09.120 ************************************ 00:03:09.120 11:18:43 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:09.120 * Looking for test storage... 00:03:09.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:09.120 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:09.120 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:09.120 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:09.120 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:09.120 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:09.120 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:09.120 11:18:43 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:09.120 11:18:43 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:09.120 11:18:43 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:09.120 11:18:43 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:09.120 11:18:43 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.120 11:18:43 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.120 11:18:43 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.120 11:18:43 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.120 11:18:43 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.120 11:18:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.120 11:18:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.120 11:18:43 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 69579052 kB' 'MemAvailable: 73041872 kB' 'Buffers: 3736 kB' 'Cached: 14535868 kB' 'SwapCached: 0 kB' 'Active: 11685440 kB' 'Inactive: 3529992 kB' 'Active(anon): 11233128 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 679240 kB' 'Mapped: 203084 kB' 'Shmem: 10557300 kB' 'KReclaimable: 267820 kB' 'Slab: 907648 kB' 'SReclaimable: 267820 kB' 'SUnreclaim: 639828 kB' 'KernelStack: 23184 kB' 'PageTables: 10784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52434752 kB' 'Committed_AS: 12681196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220248 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 
0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB'
[xtrace condensed: 00:03:09.120-00:03:09.122 11:18:43 setup.sh.hugepages -- setup/common.sh@31-32 -- the get_meminfo loop reads the snapshot above one "key: value" pair at a time and hits "continue" for every key that is not Hugepagesize (MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active/Inactive and their (anon)/(file) variants, Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free)]
00:03:09.122 11:18:43 setup.sh.hugepages --
setup/common.sh@31 -- # read -r var val _ 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:09.122 
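The block above is get_meminfo from setup/common.sh walking /proc/meminfo one key at a time until it reaches Hugepagesize and echoing 2048, which hugepages.sh then records as default_hugepages before zeroing the per-node counters. A minimal standalone sketch of the same lookup, assuming a plain while-read over /proc/meminfo is all that is needed (hypothetical helper name, not the SPDK function itself):

    # meminfo_value KEY -> print the numeric value of one /proc/meminfo field,
    # e.g. `meminfo_value Hugepagesize` prints 2048 on this machine.
    meminfo_value() {
        local key=$1 k v _
        while IFS=': ' read -r k v _; do
            [[ $k == "$key" ]] && { echo "$v"; return 0; }
        done < /proc/meminfo
        return 1  # field not exported by this kernel
    }

Calling it for HugePages_Total, HugePages_Free, and the other counters works the same way, which is why the trace repeats this scan once per counter further down.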
11:18:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:09.122 11:18:43 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:09.122 11:18:43 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:09.122 11:18:43 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.122 11:18:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:09.122 ************************************ 00:03:09.122 START TEST default_setup 00:03:09.122 ************************************ 00:03:09.122 11:18:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:09.122 11:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:09.122 11:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:09.122 11:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:09.122 11:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:09.122 11:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:09.122 11:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:09.122 11:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:09.122 11:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:09.122 11:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:09.122 11:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:09.122 11:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:09.122 11:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:09.122 11:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:09.122 11:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:09.122 11:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:09.122 11:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:09.122 11:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:09.122 11:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:09.122 11:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:09.122 11:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:09.122 11:18:43 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.122 11:18:43 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:12.415 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:12.415 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:12.415 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:12.415 
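get_test_nr_hugepages above converts the requested size of 2097152 (interpreted here in kB) into nr_hugepages=1024 given the 2048 kB default page size and assigns all of them to node 0, right after clear_hp has written 0 into every per-node nr_hugepages file. A rough sketch of that arithmetic and the sysfs writes, assuming the standard hugepage sysfs layout rather than quoting setup.sh itself:

    # Work out how many default-size hugepages cover the requested amount (in kB)
    # and reserve them on node0 after clearing stale reservations on every node.
    request_kb=2097152
    hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 here
    nr_hugepages=$((request_kb / hugepage_kb))                        # -> 1024

    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*kB; do
            echo 0 | sudo tee "$hp/nr_hugepages" > /dev/null          # clear_hp equivalent
        done
    done
    echo "$nr_hugepages" | sudo tee \
        "/sys/devices/system/node/node0/hugepages/hugepages-${hugepage_kb}kB/nr_hugepages" > /dev/null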
0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:12.415 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:12.415 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:12.415 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:12.415 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:12.415 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:12.415 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:12.415 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:12.415 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:12.415 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:12.415 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:12.415 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:12.415 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:12.983 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:03:13.250 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:13.250 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:13.250 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:13.250 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:13.250 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:13.250 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:13.250 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:13.250 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:13.250 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:13.250 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:13.250 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:13.250 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:13.250 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:13.250 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.250 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.250 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.250 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.250 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.250 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.250 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.250 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71749576 kB' 'MemAvailable: 75212332 kB' 'Buffers: 3736 kB' 'Cached: 14535972 kB' 'SwapCached: 0 kB' 'Active: 11701812 kB' 'Inactive: 3529992 kB' 'Active(anon): 11249500 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 695576 kB' 'Mapped: 202920 kB' 'Shmem: 10557404 kB' 'KReclaimable: 267692 kB' 'Slab: 906072 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 638380 
kB' 'KernelStack: 22720 kB' 'PageTables: 9296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12701192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220104 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB'
[xtrace condensed: 00:03:13.250-00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- the same get_meminfo loop now scans the snapshot above for AnonHugePages, hitting "continue" for every other key from MemTotal through WritebackTmp]
00:03:13.251 11:18:47
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- 
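get_meminfo has just returned 0 for AnonHugePages (anon=0), recording that no anonymous transparent hugepages are in use, and the same machinery is about to fetch HugePages_Surp. Reproducing those reads by hand is a one-liner each (illustrative commands, not part of the test scripts):

    # The THP mode line checked earlier ("always [madvise] never") and the
    # hugepage counters that verify_nr_hugepages collects one get_meminfo call at a time.
    cat /sys/kernel/mm/transparent_hugepage/enabled
    grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize):' /proc/meminfo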
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.251 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.252 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71750752 kB' 'MemAvailable: 75213508 kB' 'Buffers: 3736 kB' 'Cached: 14535976 kB' 'SwapCached: 0 kB' 'Active: 11702584 kB' 'Inactive: 3529992 kB' 'Active(anon): 11250272 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 696456 kB' 'Mapped: 202924 kB' 'Shmem: 10557408 kB' 'KReclaimable: 267692 kB' 'Slab: 906088 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 638396 kB' 'KernelStack: 22720 kB' 'PageTables: 9344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12702820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220104 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB' 00:03:13.252 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.252 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.252 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.252 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.252 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.252 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.252 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.252 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.252 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.252 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.252 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': '
[xtrace condensed: 00:03:13.252-00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- get_meminfo scans the snapshot above for HugePages_Surp, hitting "continue" for every other key from Buffers through CmaTotal]
00:03:13.253 11:18:47 setup.sh.hugepages.default_setup --
setup/common.sh@31 -- # IFS=': ' 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
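Note: the xtrace output in this part of the log is the get_meminfo helper from setup/common.sh walking every key/value pair of /proc/meminfo (or a per-node meminfo file) until it hits the requested key, echoing that key's value and returning. The snippet below is a minimal sketch of the loop implied by the traced commands; it is reconstructed from the trace, not the verbatim SPDK script, so the exact argument handling and file layout are assumptions.

    # Sketch (assumption): reconstruction of the get_meminfo loop traced above.
    shopt -s extglob

    get_meminfo() {
        local get=$1        # key to report, e.g. HugePages_Rsvd or HugePages_Surp
        local node=${2:-}   # optional NUMA node number, e.g. 0
        local var val _
        local mem_f=/proc/meminfo
        local mem

        # Per-node lookups read the node-local meminfo file instead,
        # as the node=0 lookup later in this trace does.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node <n> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    # Example: get_meminfo HugePages_Surp     -> 0 (system-wide)
    #          get_meminfo HugePages_Surp 0   -> 0 (node 0 only)

In the trace that follows, the same loop re-runs for HugePages_Rsvd and HugePages_Total, after which hugepages.sh checks that HugePages_Total (1024) equals nr_hugepages plus surplus plus reserved pages before accounting the pages per NUMA node.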
00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71751040 kB' 'MemAvailable: 75213796 kB' 'Buffers: 3736 kB' 'Cached: 14535976 kB' 'SwapCached: 0 kB' 'Active: 11702352 kB' 'Inactive: 3529992 kB' 'Active(anon): 11250040 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 696220 kB' 'Mapped: 202924 kB' 'Shmem: 10557408 kB' 'KReclaimable: 267692 kB' 'Slab: 906080 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 638388 kB' 'KernelStack: 22768 kB' 'PageTables: 9124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12702840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220136 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB' 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.253 
11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.253 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 
11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.254 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:13.255 nr_hugepages=1024 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:13.255 resv_hugepages=0 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:13.255 surplus_hugepages=0 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:13.255 anon_hugepages=0 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71752892 kB' 'MemAvailable: 75215648 kB' 'Buffers: 3736 kB' 'Cached: 14536016 kB' 'SwapCached: 0 kB' 'Active: 11702732 kB' 'Inactive: 3529992 kB' 'Active(anon): 11250420 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 696184 kB' 'Mapped: 202924 kB' 'Shmem: 10557448 kB' 'KReclaimable: 267692 kB' 'Slab: 905920 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 638228 kB' 'KernelStack: 22896 kB' 'PageTables: 9356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12702616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220232 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB' 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.255 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.255 11:18:47 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.256 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.257 11:18:47 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.257 11:18:47 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 40711508 kB' 'MemUsed: 7356888 kB' 'SwapCached: 0 kB' 'Active: 4012400 kB' 'Inactive: 230712 kB' 'Active(anon): 3885880 kB' 'Inactive(anon): 0 kB' 'Active(file): 126520 kB' 'Inactive(file): 230712 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4110032 kB' 'Mapped: 64816 kB' 'AnonPages: 136388 kB' 'Shmem: 3752800 kB' 'KernelStack: 11944 kB' 'PageTables: 3760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122096 kB' 'Slab: 404496 kB' 'SReclaimable: 122096 kB' 'SUnreclaim: 282400 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.257 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.258 11:18:47 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.258 11:18:47 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:13.258 node0=1024 expecting 1024 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:13.258 00:03:13.258 real 0m4.081s 00:03:13.258 user 0m1.295s 00:03:13.258 sys 0m2.020s 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:13.258 11:18:47 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:13.258 ************************************ 00:03:13.258 END TEST default_setup 00:03:13.258 ************************************ 00:03:13.258 11:18:47 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:13.258 11:18:47 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:13.258 11:18:47 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:13.258 11:18:47 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:13.258 11:18:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:13.518 ************************************ 00:03:13.518 START TEST per_node_1G_alloc 00:03:13.518 ************************************ 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:13.518 11:18:47 
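The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" lines above is the xtrace of setup/common.sh's get_meminfo helper walking every /proc/meminfo field until it reaches the requested key and echoes its value (here HugePages_Surp, value 0). A minimal sketch of that lookup, assuming the IFS=': ' / read -r var val _ loop and the "Node N " prefix stripping visible in the trace; the names follow the trace and the real helper lives in spdk/test/setup/common.sh:

    shopt -s extglob   # needed for the +([0-9]) prefix strip below

    get_meminfo() {
        local get=$1 node=${2:-}             # key to look up, optional NUMA node
        local var val _ mem_f=/proc/meminfo
        # Per-node lookups read the node-specific meminfo file instead
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node <N> "; strip it so keys match
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching keys (the long trace above)
            echo "${val:-0}"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp   # prints 0 on this box, matching the "echo 0" in the trace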
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.518 11:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:16.047 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:16.047 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:16.047 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:16.047 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:16.047 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:16.047 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:16.047 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:16.047 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:16.047 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:16.047 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:16.047 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:16.047 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:16.047 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:16.047 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 
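Just before the setup.sh device listing above, the trace walks hugepages.sh's get_test_nr_hugepages 1048576 0 1: the 1 GiB request is divided by the default hugepage size (2048 kB) to get 512 pages, and get_test_nr_hugepages_per_node then assigns that count to each requested node, which is why this test is driven with NRHUGE=512 HUGENODE=0,1. A condensed sketch of that arithmetic, assuming the variable names seen in the trace (the real logic lives in spdk/test/setup/hugepages.sh):

    default_hugepages=2048   # kB, i.e. Hugepagesize from /proc/meminfo

    get_test_nr_hugepages() {
        local size=$1; shift                 # requested size in kB (1048576 kB == 1 GiB)
        local node_ids=("$@") node           # e.g. 0 1
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))   # 1048576 / 2048 == 512
        nodes_test=()
        for node in "${node_ids[@]}"; do
            nodes_test[node]=$nr_hugepages   # 512 pages requested on every listed node
        done
    }

    get_test_nr_hugepages 1048576 0 1
    hugenode=$(IFS=,; echo "${!nodes_test[*]}")     # same trick as the "local IFS=," in the trace
    echo "NRHUGE=$nr_hugepages HUGENODE=$hugenode"  # NRHUGE=512 HUGENODE=0,1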
00:03:16.047 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:16.047 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:16.047 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71752580 kB' 'MemAvailable: 75215336 kB' 'Buffers: 3736 kB' 'Cached: 14536116 kB' 'SwapCached: 0 kB' 'Active: 11702220 kB' 'Inactive: 3529992 kB' 'Active(anon): 11249908 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 695080 kB' 'Mapped: 202052 kB' 'Shmem: 10557548 kB' 'KReclaimable: 267692 kB' 'Slab: 905124 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 637432 kB' 'KernelStack: 23104 kB' 'PageTables: 10172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12688320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220248 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB' 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.310 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.311 11:18:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # 
anon=0 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71752704 kB' 'MemAvailable: 75215460 kB' 'Buffers: 3736 kB' 'Cached: 14536132 kB' 'SwapCached: 0 kB' 'Active: 11701652 kB' 'Inactive: 3529992 kB' 'Active(anon): 11249340 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 695092 kB' 'Mapped: 201980 kB' 'Shmem: 10557564 kB' 'KReclaimable: 267692 kB' 'Slab: 905052 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 637360 kB' 'KernelStack: 23056 kB' 'PageTables: 9924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12687232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220104 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.311 11:18:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.311 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 
11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.312 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.313 11:18:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.313 11:18:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71753104 kB' 'MemAvailable: 75215860 kB' 'Buffers: 3736 kB' 'Cached: 14536148 kB' 'SwapCached: 0 kB' 'Active: 11701748 kB' 'Inactive: 3529992 kB' 'Active(anon): 11249436 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 695164 kB' 'Mapped: 201964 kB' 'Shmem: 10557580 kB' 'KReclaimable: 267692 kB' 'Slab: 905084 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 637392 kB' 'KernelStack: 23024 kB' 'PageTables: 10192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12688864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220232 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.313 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.314 
11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.314 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.315 
11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:16.315 
nr_hugepages=1024 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:16.315 resv_hugepages=0 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:16.315 surplus_hugepages=0 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:16.315 anon_hugepages=0 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71752616 kB' 'MemAvailable: 75215372 kB' 'Buffers: 3736 kB' 'Cached: 14536172 kB' 'SwapCached: 0 kB' 'Active: 11701256 kB' 'Inactive: 3529992 kB' 'Active(anon): 11248944 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 694712 kB' 'Mapped: 201964 kB' 'Shmem: 10557604 kB' 'KReclaimable: 267692 kB' 'Slab: 905084 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 637392 kB' 'KernelStack: 23152 kB' 'PageTables: 10240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12688884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220200 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB' 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.315 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.316 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.577 11:18:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.577 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 41756928 kB' 'MemUsed: 6311468 kB' 'SwapCached: 0 kB' 'Active: 4012236 kB' 'Inactive: 230712 kB' 'Active(anon): 3885716 kB' 'Inactive(anon): 0 kB' 'Active(file): 126520 kB' 'Inactive(file): 230712 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4110088 kB' 'Mapped: 64432 kB' 'AnonPages: 135984 kB' 'Shmem: 3752856 kB' 'KernelStack: 12072 kB' 'PageTables: 3864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122096 kB' 'Slab: 404052 kB' 'SReclaimable: 122096 kB' 'SUnreclaim: 281956 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.578 11:18:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.578 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... repetitive get_meminfo field scan elided: the remaining fields of the node meminfo (MemUsed through Unaccepted) are each compared against HugePages_Surp and skipped with "continue" ...]
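What the trace around this point shows is setup/common.sh's get_meminfo helper: it reads a meminfo file one "key: value" line at a time (IFS=': ' read -r var val _) and keeps skipping lines ("continue") until the requested key is reached, then echoes that key's value and returns. A minimal standalone sketch of the same pattern; the name get_meminfo_sketch and the sed-based "Node <n>" prefix strip are illustrative simplifications, not the project's actual helper:

get_meminfo_sketch() {
  # get_meminfo_sketch <key> [<numa node>] - print one meminfo counter.
  local get=$1 node=$2 var val
  local mem_f=/proc/meminfo
  # Per-node statistics live under sysfs; fall back to the global file otherwise.
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  # Per-node files prefix every line with "Node <n> "; strip it so the same
  # "key value" scan works for both the global and the per-node file.
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done < <(sed 's/^Node [0-9]* //' "$mem_f")
  return 1
}

For example, get_meminfo_sketch HugePages_Surp 1 prints node 1's surplus hugepage count (0 in this run).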
00:03:16.579 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.579 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:16.579 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.579 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:16.579 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.579 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:16.579 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:16.579 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:16.579 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:16.579 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:16.579 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:16.579 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:16.579 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:03:16.579 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:16.579 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:16.579 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.579 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:16.579 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:16.579 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.579 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.579 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:16.579 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:16.579 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218208 kB' 'MemFree: 29996684 kB' 'MemUsed: 14221524 kB' 'SwapCached: 0 kB' 'Active: 7688004 kB' 'Inactive: 3299280 kB' 'Active(anon): 7362212 kB' 'Inactive(anon): 0 kB' 'Active(file): 325792 kB' 'Inactive(file): 3299280 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10429864 kB' 'Mapped: 137356 kB' 'AnonPages: 557644 kB' 'Shmem: 6804792 kB' 'KernelStack: 10792 kB' 'PageTables: 5652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 145596 kB' 'Slab: 501032 kB' 'SReclaimable: 145596 kB' 'SUnreclaim: 355436 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... repetitive get_meminfo field scan elided: the node1 fields above (MemTotal through HugePages_Free) are each compared against HugePages_Surp and skipped with "continue" ...]
00:03:16.580 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.580 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:16.580 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:16.580 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:16.580 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:16.580 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:16.580 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:16.580 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:16.581 node0=512 expecting 512
00:03:16.581 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:16.581 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:16.581 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:16.581 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:16.581 node1=512 expecting 512
00:03:16.581 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:16.581 real	0m3.131s
00:03:16.581 user	0m1.256s
00:03:16.581 sys	0m1.923s
00:03:16.581 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:16.581 11:18:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:16.581 ************************************
00:03:16.581 END TEST per_node_1G_alloc
00:03:16.581 ************************************
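The per_node_1G_alloc test finishes by comparing, for every NUMA node, the hugepage count it observed against the count it expected (512 pages of 2048 kB per node here, i.e. 1 GiB per node, hence the test name). A rough sketch of that per-node comparison built on the same per-node meminfo files the trace reads; the awk extraction and variable names are illustrative, and the real setup/hugepages.sh additionally folds reserved and surplus pages into its bookkeeping:

expected_per_node=512
for node_dir in /sys/devices/system/node/node[0-9]*; do
  node=${node_dir##*node}   # "/sys/.../node1" -> "1"
  # Per-node meminfo lines look like "Node 1 HugePages_Total:   512".
  total=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
  surp=$(awk '/HugePages_Surp/ {print $NF}' "$node_dir/meminfo")
  echo "node$node=$((total + surp)) expecting $expected_per_node"
done

On this machine both nodes report 512 pages, which is what the "node0=512 expecting 512" and "node1=512 expecting 512" lines above record.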
00:03:16.581 11:18:50 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:16.581 11:18:50 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:16.581 11:18:50 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:16.581 11:18:50 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:16.581 11:18:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:16.581 ************************************
00:03:16.581 START TEST even_2G_alloc
00:03:16.581 ************************************
00:03:16.581 11:18:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:03:16.581 11:18:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:16.581 11:18:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:16.581 11:18:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:16.581 11:18:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:16.581 11:18:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:16.581 11:18:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
[... get_test_nr_hugepages_per_node trace elided: no user_nodes given, _nr_hugepages=1024 is split across _no_nodes=2, so nodes_test[1]=512 and nodes_test[0]=512 ...]
00:03:16.581 11:18:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:16.581 11:18:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:16.581 11:18:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:16.581 11:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:16.581 11:18:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:19.874 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:19.874 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:19.874 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:19.874 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:19.874 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:19.874 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:19.874 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:19.874 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:19.874 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:19.874 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:19.874 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:19.874 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:19.874 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:19.874 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:19.874 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:19.874 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:19.874 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71737640 kB' 'MemAvailable: 75200396 kB' 'Buffers: 3736 kB' 
'Cached: 14536284 kB' 'SwapCached: 0 kB' 'Active: 11702216 kB' 'Inactive: 3529992 kB' 'Active(anon): 11249904 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 695320 kB' 'Mapped: 201864 kB' 'Shmem: 10557716 kB' 'KReclaimable: 267692 kB' 'Slab: 904860 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 637168 kB' 'KernelStack: 22784 kB' 'PageTables: 9456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12689204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220440 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB' 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.874 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.874 
11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... repetitive get_meminfo field scan elided: Active through HardwareCorrupted are each compared against AnonHugePages and skipped with "continue" ...]
00:03:19.875 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.875 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.875 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:19.875 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:19.875 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:19.875 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.875 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:19.875 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:19.875 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.875 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.875 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.875 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.875 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.875 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.875 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.875 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.875 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71741328 kB' 'MemAvailable: 75204084 kB' 'Buffers: 3736 kB' 'Cached: 14536288 kB' 'SwapCached: 0 kB' 'Active: 11700300 kB' 'Inactive: 3529992 kB' 'Active(anon): 11247988 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 693540 kB' 'Mapped: 201792 kB' 'Shmem: 10557720 kB' 'KReclaimable: 267692 kB' 'Slab: 904976 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 637284 kB' 'KernelStack: 22640 kB' 'PageTables: 9076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12686372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220200 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB' 00:03:19.875 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.875 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.875 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.875 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.875 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.875 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.875 
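At this point the even_2G_alloc verification has checked the transparent hugepage setting (the "always [madvise] never" string earlier in the trace), read AnonHugePages (0 kB) from /proc/meminfo, and is re-reading the same file for HugePages_Surp. The counters this pass cares about can also be pulled out directly instead of with the field-by-field loop; a small illustrative sketch (not the project's code), with the expected total taken from this run's NRHUGE=1024:

expected_total=1024
total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp/ {print $2}' /proc/meminfo)
anon=$(awk '/^AnonHugePages/ {print $2}' /proc/meminfo)
echo "HugePages_Total=$total HugePages_Surp=$surp AnonHugePages=${anon} kB"
(( total == expected_total && surp == 0 )) || echo "hugepage pool differs from the requested $expected_total pages"

In this run /proc/meminfo reports HugePages_Total: 1024, HugePages_Surp: 0 and AnonHugePages: 0 kB, matching the request.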
11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.875 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... repetitive get_meminfo field scan elided: MemAvailable through KernelStack are each compared against HugePages_Surp and skipped with "continue"; the scan continues ...]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.876 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
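The trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo field by field until it reaches the requested key (here HugePages_Surp) and echoing its value; the same loop repeats below for HugePages_Rsvd and HugePages_Total. A minimal bash sketch of that lookup pattern, using a hypothetical helper name and simplified handling rather than SPDK's exact code:

get_meminfo_sketch() {                            # hypothetical name, not SPDK's function
    local get=$1 node=${2:-}                      # key to look up, optional NUMA node
    local mem_f=/proc/meminfo                     # system-wide counters by default
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # per-node counters
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#Node [0-9]* }                 # per-node files prefix each field with "Node <n> "
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue          # skip every field except the requested one
        echo "$val"                               # numeric value, e.g. 0 or 1024
        return 0
    done <"$mem_f"
    return 1
}

In this run such a lookup would yield 0 for HugePages_Surp and HugePages_Rsvd and 1024 for HugePages_Total, which is exactly what the surp=0, resv=0 and nr_hugepages=1024 assignments in the trace that follows record.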
00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.877 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71741328 kB' 'MemAvailable: 75204084 kB' 'Buffers: 3736 kB' 'Cached: 14536304 kB' 'SwapCached: 0 kB' 'Active: 11700320 kB' 'Inactive: 3529992 kB' 'Active(anon): 11248008 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 693540 kB' 'Mapped: 201792 kB' 'Shmem: 10557736 kB' 'KReclaimable: 267692 kB' 'Slab: 904976 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 637284 kB' 'KernelStack: 22640 kB' 'PageTables: 9076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12686392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220184 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB'
00:03:19.879 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:19.879 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:19.879 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:19.879 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:19.879 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:19.879 nr_hugepages=1024 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:19.879 resv_hugepages=0 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:19.879 surplus_hugepages=0 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:19.879 anon_hugepages=0 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:19.879 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:19.879 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
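At this point the even_2G_alloc flow has recorded nr_hugepages=1024 with no surplus or reserved pages, and it goes on to read HugePages_Total and the per-node counters, consistent with checking that the 1024 pages of 2048 kB are split evenly across the two NUMA nodes (512 per node, i.e. 1 GiB each). A rough sketch of that bookkeeping, with illustrative variable names and the helper sketched earlier rather than the exact logic of setup/hugepages.sh:

nr_hugepages=1024 surp=0 resv=0                   # values taken from the trace above
no_nodes=2                                        # nodes present under /sys/devices/system/node
per_node=$((nr_hugepages / no_nodes))             # even split: 512 x 2048 kB = 1 GiB per node

for ((node = 0; node < no_nodes; node++)); do
    node_total=$(get_meminfo_sketch HugePages_Total "$node")   # per-node meminfo lookup
    node_surp=$(get_meminfo_sketch HugePages_Surp "$node")
    (( node_total - node_surp == per_node )) \
        || echo "node$node holds $((node_total - node_surp)) pages, expected $per_node"
done

The nodes_sys[...]=512 assignments in the trace that follows show each of the two nodes already reporting 512 pages, which matches the expected even split.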
00:03:19.879 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:19.879 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:19.879 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:19.879 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:19.879 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.879 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:19.879 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:19.879 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.879 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.879 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.879 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.879 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71740572 kB' 'MemAvailable: 75203328 kB' 'Buffers: 3736 kB' 'Cached: 14536328 kB' 'SwapCached: 0 kB' 'Active: 11700348 kB' 'Inactive: 3529992 kB' 'Active(anon): 11248036 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 693540 kB' 'Mapped: 201792 kB' 'Shmem: 10557760 kB' 'KReclaimable: 267692 kB' 'Slab: 904976 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 637284 kB' 'KernelStack: 22640 kB' 'PageTables: 9076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12686416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220184 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB'
var val _ 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 41742592 kB' 'MemUsed: 6325804 kB' 'SwapCached: 0 kB' 'Active: 4010800 kB' 'Inactive: 230712 kB' 'Active(anon): 3884280 kB' 'Inactive(anon): 0 kB' 
'Active(file): 126520 kB' 'Inactive(file): 230712 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4110088 kB' 'Mapped: 64436 kB' 'AnonPages: 134520 kB' 'Shmem: 3752856 kB' 'KernelStack: 11896 kB' 'PageTables: 3380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122096 kB' 'Slab: 403796 kB' 'SReclaimable: 122096 kB' 'SUnreclaim: 281700 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.881 11:18:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.881 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.882 11:18:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
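For anyone following the xtrace above: the harness is resolving per-node hugepage counters by reading /sys/devices/system/node/node0/meminfo and scanning it field by field until it reaches the requested key (here HugePages_Surp). A minimal standalone sketch of that lookup, assuming bash and the /sys layout seen in this run; the function name get_node_meminfo is invented for illustration and is not part of setup/common.sh:

get_node_meminfo() {
    local key=$1 node=$2 var val _
    # Per-node meminfo lines look like "Node 0 HugePages_Surp:    0",
    # so skip the leading "Node <n>" columns and compare the field name.
    while IFS=': ' read -r _ _ var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < "/sys/devices/system/node/node${node}/meminfo"
    return 1
}

# e.g. get_node_meminfo HugePages_Surp 0   prints 0 on this machine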
00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:19.882 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218208 kB' 'MemFree: 29997980 kB' 'MemUsed: 14220228 kB' 'SwapCached: 0 kB' 'Active: 7689548 kB' 'Inactive: 3299280 kB' 'Active(anon): 7363756 kB' 
'Inactive(anon): 0 kB' 'Active(file): 325792 kB' 'Inactive(file): 3299280 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10429976 kB' 'Mapped: 137356 kB' 'AnonPages: 559020 kB' 'Shmem: 6804904 kB' 'KernelStack: 10744 kB' 'PageTables: 5696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 145596 kB' 'Slab: 501180 kB' 'SReclaimable: 145596 kB' 'SUnreclaim: 355584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.883 11:18:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.883 11:18:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.883 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.884 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.884 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.884 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.884 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.884 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.884 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.884 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.884 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.884 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.884 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.884 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.884 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.884 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.884 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.884 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.884 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.884 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.884 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.884 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.884 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.884 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.884 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.884 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.884 11:18:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
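Stripped of the trace noise, what this pass verifies is: the global HugePages_Total in /proc/meminfo must equal the requested page count plus surplus and reserved pages, and with HUGE_EVEN_ALLOC the 1024 pages must land as 512 on each of the two NUMA nodes, which is what the node0/node1 echoes further down report. A compact way to express the same check, assuming bash, awk and a two-node box like this one (the variable names here are made up, not taken from hugepages.sh):

nr_hugepages=1024 surp=0 resv=0
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
(( total == nr_hugepages + surp + resv )) || echo "global hugepage count mismatch"
for node in /sys/devices/system/node/node[0-9]*; do
    # per-node lines carry a "Node <n>" prefix, so the count is field 4
    per_node=$(awk '/HugePages_Total:/ {print $4}' "$node/meminfo")
    echo "${node##*/}=${per_node} expecting $(( nr_hugepages / 2 ))"
done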
00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:19.884 node0=512 expecting 512 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:19.884 node1=512 expecting 512 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:19.884 00:03:19.884 real 0m3.084s 00:03:19.884 user 0m1.227s 00:03:19.884 sys 0m1.902s 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:19.884 11:18:54 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:19.884 ************************************ 00:03:19.884 END TEST even_2G_alloc 00:03:19.884 ************************************ 00:03:19.884 11:18:54 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:19.884 11:18:54 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:19.884 11:18:54 setup.sh.hugepages 
-- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:19.884 11:18:54 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:19.884 11:18:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:19.884 ************************************ 00:03:19.884 START TEST odd_alloc 00:03:19.884 ************************************ 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.884 11:18:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:22.429 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:22.429 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:22.429 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:22.429 0000:00:04.5 (8086 2021): Already using 
the vfio-pci driver 00:03:22.429 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:22.429 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:22.429 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:22.429 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:22.429 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:22.429 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:22.429 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:22.429 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:22.429 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:22.429 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:22.429 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:22.429 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:22.429 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71744872 kB' 'MemAvailable: 75207628 kB' 'Buffers: 3736 kB' 'Cached: 14536444 kB' 'SwapCached: 0 kB' 'Active: 11701100 kB' 'Inactive: 3529992 kB' 'Active(anon): 11248788 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 694220 kB' 'Mapped: 201836 kB' 'Shmem: 10557876 kB' 'KReclaimable: 
267692 kB' 'Slab: 905424 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 637732 kB' 'KernelStack: 22592 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 12687192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220184 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB' 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.429 11:18:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.429 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 
11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 11:18:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 
11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.430 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
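Note: the long runs of '[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]' / 'continue' entries above are bash xtrace from setup/common.sh's get_meminfo() walking /proc/meminfo field by field until it reaches the requested key. A minimal sketch of that loop, reconstructed from the trace (not the verbatim helper; the per-node handling is abbreviated, which is why the '[[ -e /sys/devices/system/node/node/meminfo ]]' probe above shows an empty node):

    # Sketch of get_meminfo() as inferred from the setup/common.sh trace above.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem
        # With a node number, read the per-node file instead; with no node the
        # probe fails (as in the trace) and /proc/meminfo is used.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]] && \
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")     # strip "Node N " prefixes on per-node files
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue # the repeated "continue" lines in the log
            echo "$val"                      # e.g. "echo 0" for AnonHugePages above
            return 0
        done
    }

Called as 'get_meminfo AnonHugePages' (or with a node number as the second argument) it prints just the numeric value, which is what hugepages.sh captures into anon=, surp= and resv= in the steps that follow.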
00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71744856 kB' 'MemAvailable: 75207612 kB' 'Buffers: 3736 kB' 'Cached: 14536448 kB' 'SwapCached: 0 kB' 'Active: 11701028 kB' 'Inactive: 3529992 kB' 'Active(anon): 11248716 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 694108 kB' 'Mapped: 201820 kB' 'Shmem: 10557880 kB' 'KReclaimable: 267692 kB' 'Slab: 905384 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 637692 kB' 'KernelStack: 22640 kB' 'PageTables: 9080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 12687212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220184 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
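The printf '%s\n' dump above is the full /proc/meminfo snapshot the helper just read; it is re-printed on every get_meminfo call, which is why near-identical dumps recur below. The hugepage counters in it are internally consistent, as a quick check with the values copied from the log shows:

    hugepages_total=1025      # 'HugePages_Total: 1025' in the dump above
    hugepagesize_kb=2048      # 'Hugepagesize: 2048 kB'
    echo $((hugepages_total * hugepagesize_kb))   # 2099200 -> matches 'Hugetlb: 2099200 kB'

HugePages_Free equals HugePages_Total and both HugePages_Rsvd and HugePages_Surp are 0, so at this point none of the 1025 odd-count pages are in use or reserved.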
00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.431 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.432 11:18:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.432 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71745004 kB' 'MemAvailable: 75207760 kB' 'Buffers: 3736 kB' 'Cached: 14536464 kB' 'SwapCached: 0 kB' 'Active: 11701044 kB' 'Inactive: 3529992 kB' 'Active(anon): 11248732 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 694104 kB' 'Mapped: 201820 kB' 'Shmem: 10557896 kB' 'KReclaimable: 267692 kB' 'Slab: 905384 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 637692 kB' 'KernelStack: 22640 kB' 'PageTables: 9080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 12687232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220184 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB' 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.695 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 
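For reference, the handful of counters this test actually inspects can be pulled out of /proc/meminfo directly, without the full dump; a hypothetical one-liner (not something the script itself runs):

    # On this box the dump above shows: AnonHugePages 0 kB, HugePages_Total 1025,
    # HugePages_Free 1025, HugePages_Rsvd 0, HugePages_Surp 0, Hugepagesize 2048 kB.
    grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize):' /proc/meminfo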
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.696 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.697 
11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:22.697 nr_hugepages=1025 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:22.697 resv_hugepages=0 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:22.697 surplus_hugepages=0 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:22.697 anon_hugepages=0 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:22.697 11:18:56 
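At this point hugepages.sh has collected anon=0, surp=0 and resv=0, echoes the working values (nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), asserts the odd allocation, and starts one more lookup. A sketch of the hugepages.sh@97-110 steps visible in the trace (variable names taken from the log; the awk stand-in for get_meminfo is an assumption, and the trace above only shows the HugePages_Total call being made, not how its result is consumed):

    # Stand-in for setup/common.sh:get_meminfo, just for this sketch.
    get_meminfo() { awk -v k="$1" -F': +' '$1 == k { print $2 + 0; exit }' /proc/meminfo; }

    nr_hugepages=1025                       # expected odd page count for this test case
    anon=$(get_meminfo AnonHugePages)       # hugepages.sh@97  -> 0 above
    surp=$(get_meminfo HugePages_Surp)      # hugepages.sh@99  -> 0 above
    resv=$(get_meminfo HugePages_Rsvd)      # hugepages.sh@100 -> 0 above
    echo "nr_hugepages=$nr_hugepages"       # hugepages.sh@102
    echo "resv_hugepages=$resv"             # hugepages.sh@103
    echo "surplus_hugepages=$surp"          # hugepages.sh@104
    echo "anon_hugepages=$anon"             # hugepages.sh@105
    (( 1025 == nr_hugepages + surp + resv ))  # hugepages.sh@107 (1025 already expanded in the xtrace)
    (( 1025 == nr_hugepages ))                # hugepages.sh@109
    get_meminfo HugePages_Total               # hugepages.sh@110; its result is used after this excerpt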
setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71745396 kB' 'MemAvailable: 75208152 kB' 'Buffers: 3736 kB' 'Cached: 14536504 kB' 'SwapCached: 0 kB' 'Active: 11700716 kB' 'Inactive: 3529992 kB' 'Active(anon): 11248404 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 693724 kB' 'Mapped: 201820 kB' 'Shmem: 10557936 kB' 'KReclaimable: 267692 kB' 'Slab: 905384 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 637692 kB' 'KernelStack: 22624 kB' 'PageTables: 9020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 12687252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220184 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB' 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.697 11:18:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.697 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:22.698 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 41756060 kB' 'MemUsed: 6312336 kB' 'SwapCached: 0 kB' 'Active: 4011424 kB' 'Inactive: 230712 kB' 'Active(anon): 3884904 kB' 'Inactive(anon): 0 kB' 'Active(file): 126520 kB' 'Inactive(file): 230712 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4110100 kB' 'Mapped: 64448 kB' 'AnonPages: 135192 kB' 'Shmem: 3752868 kB' 'KernelStack: 11880 kB' 'PageTables: 3424 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122096 kB' 'Slab: 404212 kB' 'SReclaimable: 122096 kB' 'SUnreclaim: 282116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.699 
11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
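
The long runs of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' followed by 'continue' above are xtrace from common.sh's get_meminfo helper, which walks every line of the node's meminfo file until it reaches the field it was asked for and echoes that value. A condensed, standalone sketch of the same parsing pattern (the function name get_node_meminfo and the hard-coded sysfs path are illustrative, not SPDK's exact helper, which also falls back to /proc/meminfo when no node is given):

  shopt -s extglob   # needed for the +([0-9]) pattern below

  # get_node_meminfo FIELD NODE -> print FIELD's value from that node's meminfo
  get_node_meminfo() {
      local get=$1 node=$2 line var val _
      local mem_f=/sys/devices/system/node/node${node}/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"             # one array element per meminfo line
      mem=("${mem[@]#Node +([0-9]) }")      # strip the leading "Node <N> " prefix
      local IFS=': '
      for line in "${mem[@]}"; do
          read -r var val _ <<< "$line"     # e.g. "HugePages_Surp:  0" -> var=HugePages_Surp val=0
          [[ $var == "$get" ]] || continue  # skip every field that was not asked for
          echo "$val"
          return 0
      done
      return 1                              # requested field not present
  }

  # e.g. "get_node_meminfo HugePages_Surp 0" prints 0 on the machine in this log
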
00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.699 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.700 11:18:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218208 kB' 'MemFree: 29992304 kB' 'MemUsed: 14225904 kB' 'SwapCached: 0 kB' 'Active: 7689980 kB' 'Inactive: 3299280 kB' 'Active(anon): 7364188 kB' 'Inactive(anon): 0 kB' 'Active(file): 325792 kB' 'Inactive(file): 3299280 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10430160 kB' 'Mapped: 137372 kB' 'AnonPages: 559232 kB' 'Shmem: 6805088 kB' 'KernelStack: 10776 kB' 'PageTables: 5712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 145596 kB' 'Slab: 501172 kB' 'SReclaimable: 145596 kB' 'SUnreclaim: 355576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.700 
11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.700 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.701 11:18:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
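
With HugePages_Surp read back as 0 for both nodes, all that remains of the odd_alloc check is confirming that the 1025 (odd) pages requested ended up spread over the two NUMA nodes as 512 + 513, which is what the 'node0=512 expecting 513' / 'node1=513 expecting 512' echoes just below report. A condensed sketch of that bookkeeping, using the counts from this run (the nodes_test values are inferred from the "expecting" output; this illustrates the idea, not SPDK's verbatim setup/hugepages.sh):

  # Actual per-node HugePages_Total read from sysfs earlier in this trace:
  nodes_sys=([0]=512 [1]=513)
  # What the test asked for per node (inferred from the "expecting" echoes):
  nodes_test=([0]=513 [1]=512)

  sorted_t=() sorted_s=()
  for node in "${!nodes_test[@]}"; do
      sorted_t[nodes_test[node]]=1     # indexed arrays: keys come back in ascending order
      sorted_s[nodes_sys[node]]=1
      echo "node${node}=${nodes_sys[node]} expecting ${nodes_test[node]}"
  done

  # Only the sorted multiset of counts must match, so it does not matter which node
  # received the odd extra page: "512 513" == "512 513" and the test passes.
  [[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]] && echo "odd_alloc distribution OK"
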
00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:22.701 node0=512 expecting 513 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:22.701 node1=513 expecting 512 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:22.701 00:03:22.701 real 0m2.935s 00:03:22.701 user 0m1.094s 00:03:22.701 sys 0m1.871s 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:22.701 11:18:57 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:22.701 ************************************ 00:03:22.701 END TEST odd_alloc 00:03:22.701 ************************************ 00:03:22.701 11:18:57 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:22.701 11:18:57 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:22.701 11:18:57 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:22.701 11:18:57 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:22.701 11:18:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:22.701 ************************************ 00:03:22.701 START TEST custom_alloc 00:03:22.701 ************************************ 00:03:22.701 11:18:57 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:22.701 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:22.701 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:22.701 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:22.701 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:22.701 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:22.701 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # 
get_test_nr_hugepages_per_node 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.702 11:18:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:26.003 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:26.003 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:26.003 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:26.003 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:26.003 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:26.003 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:26.003 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:26.004 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:26.004 0000:00:04.0 
(8086 2021): Already using the vfio-pci driver 00:03:26.004 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:26.004 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:26.004 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:26.004 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:26.004 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:26.004 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:26.004 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:26.004 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70694116 kB' 'MemAvailable: 74156872 kB' 'Buffers: 3736 kB' 'Cached: 14536592 kB' 'SwapCached: 0 kB' 'Active: 11701744 kB' 'Inactive: 3529992 kB' 'Active(anon): 11249432 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 694692 kB' 'Mapped: 201824 kB' 'Shmem: 10558024 kB' 'KReclaimable: 267692 kB' 'Slab: 905348 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 637656 kB' 'KernelStack: 22640 kB' 'PageTables: 9084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 12687736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220232 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB' 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.004 
11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.004 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.005 11:19:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 
00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70696368 kB' 'MemAvailable: 74159124 kB' 'Buffers: 3736 kB' 'Cached: 14536596 kB' 'SwapCached: 0 kB' 'Active: 11701960 kB' 'Inactive: 3529992 kB' 'Active(anon): 11249648 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 695080 kB' 'Mapped: 201820 kB' 'Shmem: 10558028 kB' 'KReclaimable: 267692 kB' 'Slab: 905392 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 637700 kB' 'KernelStack: 22640 kB' 'PageTables: 9080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 12687756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220200 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB' 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.005 11:19:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.005 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 
11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.006 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70694532 kB' 'MemAvailable: 74157288 kB' 'Buffers: 3736 kB' 'Cached: 14536612 kB' 'SwapCached: 0 kB' 'Active: 11701900 kB' 'Inactive: 3529992 kB' 
'Active(anon): 11249588 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 695056 kB' 'Mapped: 201816 kB' 'Shmem: 10558044 kB' 'KReclaimable: 267692 kB' 'Slab: 905392 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 637700 kB' 'KernelStack: 22640 kB' 'PageTables: 9116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 12687408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220184 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.007 
11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.007 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.008 11:19:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.008 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:26.009 nr_hugepages=1536 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:26.009 resv_hugepages=0 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:26.009 surplus_hugepages=0 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:26.009 anon_hugepages=0 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70695016 kB' 'MemAvailable: 74157772 kB' 'Buffers: 3736 kB' 'Cached: 14536616 kB' 'SwapCached: 0 kB' 'Active: 11701232 kB' 'Inactive: 3529992 kB' 'Active(anon): 11248920 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 693812 kB' 'Mapped: 201816 kB' 'Shmem: 10558048 kB' 'KReclaimable: 267692 kB' 'Slab: 905392 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 637700 kB' 'KernelStack: 22576 kB' 'PageTables: 8820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 
'Committed_AS: 12687428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220152 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB' 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.009 11:19:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.009 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
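The long quoted block a few entries above is the /proc/meminfo snapshot captured by mapfile, and the repeated IFS=': ' / read -r var val _ / continue entries are the xtrace of the get_meminfo helper in setup/common.sh walking that snapshot line by line until it reaches the requested key. A minimal standalone sketch of that pattern, reconstructed from the trace (the function name and file paths come from the log; the exact body of the upstream helper may differ), is:

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup pattern visible in the trace above.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-}      # e.g. HugePages_Total, optional NUMA node number
        local mem_f=/proc/meminfo
        # Per-node stats live in sysfs; with no node argument the test below fails
        # (path .../node/node/meminfo) and the system-wide file is used instead.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] && \
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node N "; strip it so the keys match.
        mem=("${mem[@]#Node +([0-9]) }")
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

In this run, get_meminfo HugePages_Total reports 1536 and get_meminfo HugePages_Rsvd reports 0, which is where the nr_hugepages=1536 and resv=0 values echoed earlier come from.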
00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.010 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 41767644 kB' 'MemUsed: 6300752 kB' 'SwapCached: 0 kB' 'Active: 4010732 kB' 'Inactive: 230712 kB' 'Active(anon): 3884212 kB' 'Inactive(anon): 0 kB' 'Active(file): 126520 kB' 'Inactive(file): 230712 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4110100 kB' 'Mapped: 64460 kB' 'AnonPages: 134520 kB' 'Shmem: 3752868 kB' 'KernelStack: 11864 kB' 'PageTables: 3332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122096 kB' 'Slab: 404320 kB' 'SReclaimable: 122096 kB' 'SUnreclaim: 282224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.011 11:19:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
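The per-node counters being scanned here come from the node meminfo files discovered earlier by get_nodes (/sys/devices/system/node/node0 and node1 on this host). They can be cross-checked directly on the machine; the grep below is only a manual spot check against the same files the helper reads, not part of the test scripts:

    # Show the hugepage counters for both NUMA nodes (paths as seen in the trace).
    grep -E 'HugePages_(Total|Free|Surp)' \
        /sys/devices/system/node/node0/meminfo \
        /sys/devices/system/node/node1/meminfo

For this run, node0 reports HugePages_Total: 512 and HugePages_Free: 512 with HugePages_Surp: 0; node1 (its dump appears further below) reports 1024, matching the 512/1024 split requested by the custom allocation.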
00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.011 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.012 11:19:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218208 kB' 'MemFree: 28927120 kB' 'MemUsed: 15291088 kB' 'SwapCached: 0 kB' 'Active: 7690380 kB' 'Inactive: 3299280 kB' 'Active(anon): 7364588 kB' 'Inactive(anon): 0 kB' 'Active(file): 325792 kB' 'Inactive(file): 3299280 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10430312 kB' 'Mapped: 137356 kB' 'AnonPages: 559628 kB' 'Shmem: 6805240 kB' 'KernelStack: 10712 kB' 'PageTables: 5488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 145596 kB' 'Slab: 501072 kB' 'SReclaimable: 145596 kB' 'SUnreclaim: 355476 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.012 11:19:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.012 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 
11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
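Pulling the numbers together: the custom allocation in this test asked for 1536 hugepages split as 512 on node0 and 1024 on node1, and the kernel reports neither surplus nor reserved pages, so the totals are consistent. A condensed illustration of that accounting follows; the variable names mirror the trace, but the real hugepages.sh accumulates these values across the node loop above rather than in a single expression:

    # Values reported in this run: nr_hugepages=1536, resv=0, per-node surplus 0.
    nr_hugepages=1536 resv=0 surp=0
    nodes_test=([0]=512 [1]=1024)
    total=$((nodes_test[0] + nodes_test[1] + surp + resv))
    ((total == nr_hugepages)) && echo "custom_alloc consistent: $total pages"

With the values above this prints "custom_alloc consistent: 1536 pages", which is the condition the surrounding (( ... )) checks in hugepages.sh are verifying.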
00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:26.013 11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( 
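The condensed scan above is the lookup pattern of setup/common.sh's get_meminfo: slurp a meminfo file, strip any "Node <n> " prefix, then read key/value pairs with IFS=': ' and skip everything until the requested key matches, echoing its value (0 for HugePages_Surp on this host). A minimal stand-alone sketch of that lookup follows; the helper name get_meminfo_sketch and the while-read loop (in place of the traced mapfile/array walk) are our assumptions, not SPDK's implementation.

  #!/usr/bin/env bash
  # Sketch of a per-key meminfo lookup in the spirit of the trace above.
  get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # A per-node query reads the node-local meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] || continue   # skip every key that is not the requested one
      echo "${val:-0}"                   # print the value (0 for HugePages_Surp here)
      return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    echo 0                               # fall back to 0 if the key is absent
  }
  get_meminfo_sketch HugePages_Surp      # -> 0 when no surplus hugepages exist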
11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
node1=1024 expecting 1024
11:19:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]

real	0m3.126s
user	0m1.287s
sys	0m1.883s
11:19:00 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
11:19:00 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST custom_alloc
************************************
11:19:00 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
11:19:00 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
11:19:00 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
11:19:00 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
11:19:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST no_shrink_alloc
************************************
11:19:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
11:19:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
11:19:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
11:19:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
11:19:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
11:19:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
11:19:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
11:19:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
11:19:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
11:19:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
11:19:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
11:19:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
11:19:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
11:19:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
11:19:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
11:19:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
11:19:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
11:19:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
11:19:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
11:19:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
11:19:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
11:19:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
11:19:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:28.551-00:03:28.815 [setup.sh output condensed: 0000:00:04.0-7 and 0000:80:04.0-7 (8086 2021), 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver]
11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89-94 -- # local node sorted_t sorted_s surp resv anon
11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
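For reference, the get_test_nr_hugepages 2097152 0 call traced above resolves to 1024 hugepages pinned to NUMA node 0: the requested amount divided by the 2048 kB hugepage size (the kB unit is an assumption inferred from the 2097152/2048 = 1024 ratio in the trace). A small sketch of that arithmetic with assumed variable names; the real logic lives in SPDK's test/setup/hugepages.sh.

  #!/usr/bin/env bash
  # Derive the per-node hugepage request the way the trace above reports it.
  size_kb=2097152                                                    # requested amount (assumed kB)
  hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo) # 2048 on this host
  nr_hugepages=$(( size_kb / hugepagesize_kb ))                      # 2097152 / 2048 = 1024
  user_nodes=(0)    # the test pins the whole request to node 0, leaving node 1 untouched
  nodes_test=()
  for node in "${user_nodes[@]}"; do
    nodes_test[node]=$nr_hugepages
  done
  echo "node0=${nodes_test[0]}"   # node0=1024, matching nr_hugepages=1024 in the trace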
00:03:28.815 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:28.815 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.815 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.815 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.815 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.815 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.815 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.815 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.815 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.815 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71722060 kB' 'MemAvailable: 75184816 kB' 'Buffers: 3736 kB' 'Cached: 14536752 kB' 'SwapCached: 0 kB' 'Active: 11702792 kB' 'Inactive: 3529992 kB' 'Active(anon): 11250480 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 695576 kB' 'Mapped: 201844 kB' 'Shmem: 10558184 kB' 'KReclaimable: 267692 kB' 'Slab: 905520 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 637828 kB' 'KernelStack: 22688 kB' 'PageTables: 9156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12688608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220120 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB' 00:03:28.815 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.815 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.815 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.815 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.815 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.815 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.815 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.815 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.815 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.815 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.815 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.815 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: IFS=': ' read walks the snapshot above key by key (Buffers through VmallocUsed), hitting `continue` for every key that is not AnonHugePages]
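Just below, the last few keys (VmallocChunk, Percpu, HardwareCorrupted) are skipped the same way, AnonHugePages finally matches, and anon=0 is recorded; the identical scan is then repeated for HugePages_Surp and HugePages_Rsvd. Purely as an illustration of the bookkeeping verify_nr_hugepages is assembling, the same fields can be collected in one awk pass; the helper name and the pass/fail condition below are assumptions, not SPDK's exact check.

  #!/usr/bin/env bash
  # Gather the hugepage accounting fields in a single pass over /proc/meminfo.
  check_hugepages_sketch() {
    local expected=$1
    awk -v want="$expected" '
      /^AnonHugePages:/   { anon  = $2 }
      /^HugePages_Total:/ { total = $2 }
      /^HugePages_Free:/  { free  = $2 }
      /^HugePages_Rsvd:/  { resv  = $2 }
      /^HugePages_Surp:/  { surp  = $2 }
      END {
        printf "anon=%d total=%d free=%d resv=%d surp=%d\n", anon, total, free, resv, surp
        # Nothing has mapped the pool yet, so every page should still be free.
        exit !(total == want + 0 && free == total && surp == 0)
      }' /proc/meminfo
  }
  check_hugepages_sketch 1024   # expects total=1024 free=1024, as in the snapshots above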
setup/common.sh@31 -- # read -r var val _ 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.816 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71723044 kB' 'MemAvailable: 75185800 kB' 'Buffers: 3736 kB' 'Cached: 14536756 kB' 'SwapCached: 0 kB' 'Active: 11702528 kB' 'Inactive: 3529992 kB' 'Active(anon): 11250216 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 695296 kB' 'Mapped: 201828 kB' 
'Shmem: 10558188 kB' 'KReclaimable: 267692 kB' 'Slab: 905556 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 637864 kB' 'KernelStack: 22704 kB' 'PageTables: 9188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12688624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220088 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB' 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.817 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.817 11:19:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the same IFS=': ' read / `continue` walk over the second snapshot, skipping every key from Inactive through HugePages_Total while looking for HugePages_Surp]
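The meminfo snapshots printed in this stretch all report HugePages_Total: 1024, HugePages_Free: 1024, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB, i.e. 1024 pages x 2048 kB; the identity Hugetlb = HugePages_Total * Hugepagesize holds here only because a single hugepage size is configured. A one-line consistency check in the same spirit, offered as our illustration rather than part of the test:

  # Cross-check the hugepage fields of a live meminfo snapshot.
  awk '/^HugePages_Total:/ {t=$2} /^HugePages_Free:/ {f=$2} /^Hugepagesize:/ {sz=$2} /^Hugetlb:/ {hb=$2}
       END { printf "total=%d free=%d size=%dkB hugetlb=%dkB consistent=%s\n",
                    t, f, sz, hb, (hb == t * sz && f <= t) ? "yes" : "no" }' /proc/meminfo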
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71722044 kB' 'MemAvailable: 75184800 kB' 'Buffers: 3736 kB' 'Cached: 14536772 kB' 'SwapCached: 0 kB' 'Active: 11702668 kB' 'Inactive: 3529992 kB' 'Active(anon): 11250356 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 695448 kB' 'Mapped: 201828 kB' 'Shmem: 10558204 kB' 'KReclaimable: 267692 kB' 'Slab: 905556 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 637864 kB' 'KernelStack: 22704 kB' 'PageTables: 9204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 
'Committed_AS: 12688648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220088 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB' 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.818 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.819 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.820 11:19:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.820 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:29.080 nr_hugepages=1024 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:29.080 resv_hugepages=0 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:29.080 surplus_hugepages=0 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:29.080 anon_hugepages=0 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71721584 kB' 'MemAvailable: 75184340 kB' 'Buffers: 3736 kB' 'Cached: 14536772 kB' 'SwapCached: 0 kB' 'Active: 11702356 kB' 'Inactive: 3529992 kB' 'Active(anon): 11250044 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 695144 kB' 'Mapped: 201828 kB' 'Shmem: 10558204 kB' 'KReclaimable: 267692 kB' 'Slab: 905556 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 637864 kB' 'KernelStack: 22704 kB' 'PageTables: 9204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12688672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
220088 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB' 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.080 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # 
return 0 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 40715880 kB' 'MemUsed: 7352516 kB' 'SwapCached: 0 kB' 'Active: 4013484 kB' 'Inactive: 230712 kB' 'Active(anon): 3886964 kB' 'Inactive(anon): 0 kB' 'Active(file): 126520 kB' 'Inactive(file): 230712 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4110180 kB' 'Mapped: 64472 kB' 'AnonPages: 137152 kB' 'Shmem: 3752948 kB' 'KernelStack: 11880 kB' 'PageTables: 3432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122096 kB' 'Slab: 404444 kB' 'SReclaimable: 122096 kB' 'SUnreclaim: 282348 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.081 11:19:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.081 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:29.082 node0=1024 expecting 1024 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.082 11:19:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:32.375 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:32.375 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:32.375 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:32.375 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:32.375 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:32.375 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:32.375 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:32.375 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:32.375 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:32.375 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:32.375 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:32.375 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:32.375 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:32.375 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:32.375 
0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:32.375 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:32.375 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:32.375 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71741260 kB' 'MemAvailable: 75204016 kB' 'Buffers: 3736 kB' 'Cached: 14536880 kB' 'SwapCached: 0 kB' 'Active: 11706284 kB' 'Inactive: 3529992 kB' 'Active(anon): 11253972 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 698888 kB' 'Mapped: 202344 kB' 'Shmem: 10558312 kB' 'KReclaimable: 267692 kB' 'Slab: 905668 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 637976 kB' 'KernelStack: 22592 kB' 'PageTables: 8860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12693416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220040 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB' 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
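(The xtrace records above and below come from the get_meminfo helper in setup/common.sh scanning /proc/meminfo one key at a time: IFS=': ' read -r var val _, a literal pattern match against the requested key, continue on every other key, then echo of the matching value and return 0. A minimal standalone sketch of that lookup, reconstructed from the traced commands rather than the verbatim source -- the name get_meminfo_sketch and the sed-based "Node N " prefix strip are assumptions; the real helper uses mapfile plus an extglob substitution -- would be:)

#!/usr/bin/env bash
# Sketch only, assembled from the traced commands above; not the actual
# setup/common.sh implementation.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node statistics live under /sys; fall back to the global file
    # when no node is given or the per-node file does not exist.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _rest
    while IFS=': ' read -r var val _rest; do
        # One "continue" per non-matching key, exactly as seen in the trace.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

# Example: the same lookups the trace performs while verifying hugepages.
get_meminfo_sketch AnonHugePages
get_meminfo_sketch HugePages_Surp
get_meminfo_sketch HugePages_Rsvd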
00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.375 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@19 -- # local var val 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.376 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71746692 kB' 'MemAvailable: 75209448 kB' 'Buffers: 3736 kB' 'Cached: 14536884 kB' 'SwapCached: 0 kB' 'Active: 11702684 kB' 'Inactive: 3529992 kB' 'Active(anon): 11250372 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 695808 kB' 'Mapped: 202112 kB' 'Shmem: 10558316 kB' 'KReclaimable: 267692 kB' 'Slab: 905732 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 638040 kB' 'KernelStack: 22624 kB' 'PageTables: 8976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12689932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220008 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.377 11:19:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.377 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.378 
11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.378 11:19:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71746560 kB' 'MemAvailable: 75209316 kB' 'Buffers: 3736 kB' 'Cached: 14536916 kB' 'SwapCached: 0 kB' 'Active: 11703260 kB' 'Inactive: 3529992 kB' 'Active(anon): 11250948 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 695976 kB' 'Mapped: 201832 kB' 'Shmem: 10558348 kB' 'KReclaimable: 267692 kB' 'Slab: 905732 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 638040 kB' 'KernelStack: 22656 kB' 'PageTables: 9084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12689304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220040 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB' 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.378 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.379 11:19:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.379 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.380 11:19:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.380 11:19:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:32.380 nr_hugepages=1024 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:32.380 resv_hugepages=0 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:32.380 surplus_hugepages=0 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:32.380 anon_hugepages=0 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.380 11:19:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71745272 kB' 'MemAvailable: 75208028 kB' 'Buffers: 3736 kB' 'Cached: 14536956 kB' 'SwapCached: 0 kB' 'Active: 11702880 kB' 'Inactive: 3529992 kB' 'Active(anon): 11250568 kB' 'Inactive(anon): 0 kB' 'Active(file): 452312 kB' 'Inactive(file): 3529992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 695476 kB' 'Mapped: 201832 kB' 'Shmem: 10558388 kB' 'KReclaimable: 267692 kB' 'Slab: 905732 kB' 'SReclaimable: 267692 kB' 'SUnreclaim: 638040 kB' 'KernelStack: 22640 kB' 'PageTables: 9020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12689324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220040 kB' 'VmallocChunk: 0 kB' 'Percpu: 93632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3507156 kB' 'DirectMap2M: 30775296 kB' 'DirectMap1G: 67108864 kB' 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.380 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.381 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 40725152 kB' 'MemUsed: 7343244 kB' 'SwapCached: 0 kB' 'Active: 4013624 kB' 'Inactive: 230712 kB' 'Active(anon): 3887104 kB' 'Inactive(anon): 0 kB' 'Active(file): 126520 kB' 'Inactive(file): 230712 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4110316 kB' 'Mapped: 64476 kB' 'AnonPages: 137212 kB' 'Shmem: 3753084 kB' 'KernelStack: 11912 kB' 'PageTables: 3488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122096 kB' 'Slab: 404464 kB' 'SReclaimable: 122096 kB' 'SUnreclaim: 282368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.382 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 
11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.383 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:32.384 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.384 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.384 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.384 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.384 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:32.384 node0=1024 expecting 1024 00:03:32.384 11:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:32.384 00:03:32.384 real 0m6.226s 00:03:32.384 user 0m2.503s 00:03:32.384 sys 0m3.803s 00:03:32.384 
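The xtrace above repeatedly walks /proc/meminfo and the per-node meminfo files one key at a time before asserting "node0=1024 expecting 1024". Below is a minimal sketch of that lookup pattern for readability; the function name is an illustrative assumption, not the actual test/setup/common.sh helper (which uses mapfile and an extglob substitution to strip the "Node <N>" prefix).

# Sketch only: mirrors the per-key meminfo lookup exercised by the trace above.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val rest
    # Per-node counters live under /sys/devices/system/node/node<N>/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r var val rest; do
        # Per-node files prefix each line with "Node <N> "; re-split without it.
        if [[ $var == Node ]]; then
            IFS=': ' read -r var val rest <<<"$rest"
        fi
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0
}
# e.g. get_meminfo_sketch HugePages_Surp 0   -> surplus huge pages on node0

The test then applies the arithmetic visible in the trace: it requires HugePages_Total to equal nr_hugepages plus surplus plus reserved pages, both globally and per NUMA node, which is what produces the "node0=1024 expecting 1024" check above.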
11:19:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:32.384 11:19:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:32.384 ************************************ 00:03:32.384 END TEST no_shrink_alloc 00:03:32.384 ************************************ 00:03:32.384 11:19:06 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:32.384 11:19:06 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:32.384 11:19:06 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:32.384 11:19:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:32.384 11:19:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.384 11:19:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:32.384 11:19:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.384 11:19:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:32.384 11:19:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:32.384 11:19:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.384 11:19:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:32.384 11:19:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.384 11:19:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:32.384 11:19:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:32.384 11:19:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:32.384 00:03:32.384 real 0m23.180s 00:03:32.384 user 0m8.938s 00:03:32.384 sys 0m13.762s 00:03:32.384 11:19:06 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:32.384 11:19:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:32.384 ************************************ 00:03:32.384 END TEST hugepages 00:03:32.384 ************************************ 00:03:32.384 11:19:06 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:32.384 11:19:06 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:32.384 11:19:06 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:32.384 11:19:06 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:32.384 11:19:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:32.384 ************************************ 00:03:32.384 START TEST driver 00:03:32.384 ************************************ 00:03:32.384 11:19:06 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:32.384 * Looking for test storage... 
00:03:32.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:32.384 11:19:06 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:32.384 11:19:06 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:32.384 11:19:06 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:36.587 11:19:10 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:36.587 11:19:10 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:36.587 11:19:10 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.587 11:19:10 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:36.587 ************************************ 00:03:36.587 START TEST guess_driver 00:03:36.587 ************************************ 00:03:36.587 11:19:10 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:36.587 11:19:10 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:36.587 11:19:10 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:36.587 11:19:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:36.587 11:19:10 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:36.587 11:19:10 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:36.587 11:19:10 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:36.587 11:19:10 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:36.587 11:19:10 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:36.587 11:19:10 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:36.587 11:19:10 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 175 > 0 )) 00:03:36.587 11:19:10 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:36.587 11:19:10 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:36.587 11:19:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:36.587 11:19:10 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:36.587 11:19:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:36.587 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:36.587 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:36.587 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:36.587 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:36.587 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:36.587 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:36.587 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:36.587 11:19:10 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:36.587 11:19:10 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:36.587 11:19:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:36.587 11:19:10 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:36.587 11:19:10 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:36.587 Looking for driver=vfio-pci 00:03:36.587 11:19:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.587 11:19:10 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:36.587 11:19:10 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.587 11:19:10 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.874 11:19:13 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.874 11:19:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.441 11:19:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.441 11:19:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.441 11:19:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.441 11:19:14 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:40.441 11:19:14 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:40.441 11:19:14 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:40.441 11:19:14 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:44.712 00:03:44.712 real 0m8.174s 00:03:44.712 user 0m2.326s 00:03:44.712 sys 0m4.212s 00:03:44.712 11:19:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.712 11:19:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:44.712 ************************************ 00:03:44.712 END TEST guess_driver 00:03:44.712 ************************************ 00:03:44.712 11:19:19 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:44.712 00:03:44.712 real 0m12.506s 00:03:44.712 user 0m3.531s 00:03:44.712 sys 0m6.500s 00:03:44.712 11:19:19 
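The guess_driver trace above settles on vfio-pci because IOMMU groups exist and modprobe can resolve vfio_pci to real .ko modules. A condensed sketch of that decision follows; the uio_pci_generic fallback is an assumption about the branch this run never takes, not something shown in the trace.

# Sketch of the driver choice walked through in the trace above.
if compgen -G "/sys/kernel/iommu_groups/*" > /dev/null \
   && modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
    echo vfio-pci          # IOMMU groups present and vfio_pci resolvable -> use it
else
    echo uio_pci_generic   # assumed fallback; not exercised in this run
fi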
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.712 11:19:19 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:44.712 ************************************ 00:03:44.712 END TEST driver 00:03:44.712 ************************************ 00:03:44.712 11:19:19 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:44.712 11:19:19 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:44.712 11:19:19 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.712 11:19:19 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.712 11:19:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:44.972 ************************************ 00:03:44.972 START TEST devices 00:03:44.972 ************************************ 00:03:44.972 11:19:19 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:44.972 * Looking for test storage... 00:03:44.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:44.972 11:19:19 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:44.972 11:19:19 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:44.972 11:19:19 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:44.972 11:19:19 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:48.265 11:19:22 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:48.265 11:19:22 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:48.265 11:19:22 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:48.265 11:19:22 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:48.265 11:19:22 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:48.265 11:19:22 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:48.265 11:19:22 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:48.265 11:19:22 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:48.265 11:19:22 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:48.265 11:19:22 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:48.265 11:19:22 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:48.265 11:19:22 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:48.265 11:19:22 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:48.265 11:19:22 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:48.265 11:19:22 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:48.265 11:19:22 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:48.265 11:19:22 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:48.265 11:19:22 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:86:00.0 00:03:48.265 11:19:22 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\6\:\0\0\.\0* ]] 00:03:48.265 11:19:22 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:48.265 11:19:22 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:48.265 
11:19:22 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:48.265 No valid GPT data, bailing 00:03:48.265 11:19:22 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:48.265 11:19:22 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:48.265 11:19:22 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:48.265 11:19:22 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:48.265 11:19:22 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:48.265 11:19:22 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:48.265 11:19:22 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:48.265 11:19:22 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:48.265 11:19:22 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:48.265 11:19:22 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:86:00.0 00:03:48.265 11:19:22 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:48.265 11:19:22 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:48.265 11:19:22 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:48.265 11:19:22 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.265 11:19:22 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.265 11:19:22 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:48.265 ************************************ 00:03:48.265 START TEST nvme_mount 00:03:48.265 ************************************ 00:03:48.265 11:19:22 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:48.265 11:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:48.266 11:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:48.266 11:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:48.266 11:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:48.266 11:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:48.266 11:19:22 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:48.266 11:19:22 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:48.266 11:19:22 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:48.266 11:19:22 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:48.266 11:19:22 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:48.266 11:19:22 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:48.266 11:19:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:48.266 11:19:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:48.266 11:19:22 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:48.266 11:19:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:48.266 11:19:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:03:48.266 11:19:22 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:48.266 11:19:22 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:48.266 11:19:22 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:49.203 Creating new GPT entries in memory. 00:03:49.204 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:49.204 other utilities. 00:03:49.204 11:19:23 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:49.204 11:19:23 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:49.204 11:19:23 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:49.204 11:19:23 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:49.204 11:19:23 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:50.583 Creating new GPT entries in memory. 00:03:50.583 The operation has completed successfully. 00:03:50.583 11:19:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:50.583 11:19:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:50.583 11:19:24 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2569736 00:03:50.583 11:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.583 11:19:24 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:50.583 11:19:24 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.583 11:19:24 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:50.583 11:19:24 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:50.583 11:19:24 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.583 11:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:86:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:50.583 11:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:03:50.583 11:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:50.583 11:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.583 11:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:50.583 11:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:50.583 11:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:50.583 11:19:24 
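Condensed, the nvme_mount preparation in the trace amounts to the sequence below (device name, partition bounds, and mount point copied from the trace; a sketch to be run only against a disposable test disk):

disk=/dev/nvme0n1
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
sgdisk "$disk" --zap-all                # drop any existing GPT/MBR metadata
sgdisk "$disk" --new=1:2048:2099199     # ~1 GiB test partition, as in the trace
mkfs.ext4 -qF "${disk}p1"
mkdir -p "$mnt" && mount "${disk}p1" "$mnt"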
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:50.583 11:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:50.583 11:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.583 11:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:03:50.583 11:19:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:50.583 11:19:24 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.583 11:19:24 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:53.121 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.122 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:53.122 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.122 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:53.122 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.122 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:53.122 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.122 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:53.122 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:53.122 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.122 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.122 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:53.122 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:53.122 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.122 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.122 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:53.122 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:53.122 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:53.122 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:53.122 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:53.381 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:53.381 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:53.381 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:53.381 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:53.381 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:53.381 11:19:27 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:53.381 11:19:27 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.381 11:19:27 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:53.381 11:19:27 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:53.641 11:19:27 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.641 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:86:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:53.641 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:03:53.641 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:53.641 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.641 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:53.641 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:53.641 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.641 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:53.641 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:53.641 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.641 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:03:53.641 11:19:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:53.641 11:19:27 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.641 11:19:27 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:56.178 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.437 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:56.437 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:56.437 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.437 11:19:30 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:56.437 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.437 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.437 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:86:00.0 data@nvme0n1 '' '' 00:03:56.437 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:03:56.437 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:56.437 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:56.437 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:56.437 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:56.437 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:56.437 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:56.437 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.437 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:03:56.437 11:19:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:56.437 11:19:30 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.437 11:19:30 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 
00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:59.724 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:59.724 00:03:59.724 real 0m11.115s 00:03:59.724 user 0m3.280s 00:03:59.724 sys 0m5.677s 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.724 11:19:33 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:03:59.724 ************************************ 00:03:59.724 END TEST nvme_mount 00:03:59.724 ************************************ 00:03:59.724 11:19:33 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:59.724 11:19:33 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:59.724 11:19:33 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.724 11:19:33 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.724 11:19:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:59.724 ************************************ 00:03:59.724 START TEST dm_mount 00:03:59.724 ************************************ 00:03:59.724 11:19:33 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:59.724 11:19:33 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:59.724 11:19:33 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:59.724 11:19:33 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:59.724 11:19:33 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:59.724 11:19:33 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:59.724 11:19:33 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:59.724 11:19:33 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:59.724 11:19:33 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:59.724 11:19:33 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:59.724 11:19:33 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:59.724 11:19:33 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:59.724 11:19:33 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:59.724 11:19:33 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:59.724 11:19:33 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:59.724 11:19:33 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:59.724 11:19:33 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:59.724 11:19:33 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:59.724 11:19:33 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:59.724 11:19:33 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:59.724 11:19:33 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:59.724 11:19:33 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:00.660 Creating new GPT entries in memory. 00:04:00.660 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:00.660 other utilities. 00:04:00.660 11:19:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:00.660 11:19:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:00.660 11:19:34 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:00.660 11:19:34 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:00.660 11:19:34 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:01.597 Creating new GPT entries in memory. 00:04:01.597 The operation has completed successfully. 00:04:01.597 11:19:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:01.597 11:19:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:01.597 11:19:35 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:01.597 11:19:35 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:01.597 11:19:35 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:02.534 The operation has completed successfully. 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2573946 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:86:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.534 11:19:36 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:05.825 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:05.825 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:05.825 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:05.825 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.825 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:05.825 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.825 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:05.825 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.825 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:05.825 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.825 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:05.825 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.825 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:05.825 11:19:39 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.825 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:05.825 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.825 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:05.825 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.825 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:05.825 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.825 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:05.825 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:86:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:05.826 11:19:39 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.826 11:19:39 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:08.359 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:08.359 00:04:08.359 real 0m9.023s 00:04:08.359 user 0m2.164s 00:04:08.359 sys 0m3.879s 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.359 11:19:42 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:08.359 ************************************ 00:04:08.359 END TEST dm_mount 00:04:08.359 ************************************ 00:04:08.618 11:19:42 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:04:08.618 11:19:42 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:08.618 11:19:42 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:08.618 11:19:42 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:08.618 11:19:42 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:08.618 11:19:42 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:08.618 11:19:42 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:08.618 11:19:42 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:08.876 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:08.876 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:08.876 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:08.876 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:08.876 11:19:43 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:08.876 11:19:43 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:08.876 11:19:43 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:08.876 11:19:43 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:08.876 11:19:43 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:08.876 11:19:43 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:08.876 11:19:43 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:08.876 00:04:08.876 real 0m23.930s 00:04:08.876 user 0m6.747s 00:04:08.876 sys 0m11.922s 00:04:08.876 11:19:43 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.876 11:19:43 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:08.876 ************************************ 00:04:08.876 END TEST devices 00:04:08.876 ************************************ 00:04:08.876 11:19:43 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:08.876 00:04:08.876 real 1m20.799s 00:04:08.876 user 0m26.272s 00:04:08.876 sys 0m44.886s 00:04:08.876 11:19:43 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.876 11:19:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:08.876 ************************************ 00:04:08.876 END TEST setup.sh 00:04:08.876 ************************************ 00:04:08.876 11:19:43 -- common/autotest_common.sh@1142 -- # return 0 00:04:08.876 11:19:43 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:11.407 Hugepages 00:04:11.407 node hugesize free / total 00:04:11.702 node0 1048576kB 0 / 0 00:04:11.702 node0 2048kB 2048 / 2048 00:04:11.702 node1 1048576kB 0 / 0 00:04:11.702 node1 2048kB 0 / 0 00:04:11.702 00:04:11.702 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:11.702 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:11.702 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:11.702 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:11.702 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:11.702 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:11.702 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:11.702 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:11.702 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:11.702 I/OAT 
0000:80:04.0 8086 2021 1 ioatdma - - 00:04:11.702 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:11.702 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:11.702 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:11.702 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:11.702 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:11.702 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:11.702 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:11.702 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:11.702 11:19:46 -- spdk/autotest.sh@130 -- # uname -s 00:04:11.702 11:19:46 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:11.702 11:19:46 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:11.702 11:19:46 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:14.991 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:14.991 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:14.991 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:14.991 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:14.991 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:14.991 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:14.991 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:14.991 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:14.991 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:14.991 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:14.991 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:14.991 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:14.991 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:14.991 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:14.991 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:14.991 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:15.560 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:04:15.819 11:19:50 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:16.756 11:19:51 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:16.756 11:19:51 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:16.756 11:19:51 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:16.756 11:19:51 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:16.756 11:19:51 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:16.756 11:19:51 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:16.756 11:19:51 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:16.756 11:19:51 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:16.756 11:19:51 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:16.756 11:19:51 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:16.756 11:19:51 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:86:00.0 00:04:16.756 11:19:51 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:20.045 Waiting for block devices as requested 00:04:20.045 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:04:20.045 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:20.045 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:20.045 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:20.045 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:20.045 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:20.045 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:20.304 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:20.304 0000:00:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:04:20.304 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:20.563 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:20.563 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:20.563 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:20.563 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:20.822 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:20.822 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:20.822 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:21.081 11:19:55 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:21.081 11:19:55 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:86:00.0 00:04:21.081 11:19:55 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:21.081 11:19:55 -- common/autotest_common.sh@1502 -- # grep 0000:86:00.0/nvme/nvme 00:04:21.081 11:19:55 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:04:21.081 11:19:55 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 ]] 00:04:21.081 11:19:55 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:04:21.081 11:19:55 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:21.081 11:19:55 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:21.081 11:19:55 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:21.081 11:19:55 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:21.081 11:19:55 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:21.081 11:19:55 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:21.081 11:19:55 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:21.081 11:19:55 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:21.081 11:19:55 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:21.081 11:19:55 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:21.081 11:19:55 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:21.081 11:19:55 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:21.081 11:19:55 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:21.081 11:19:55 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:21.081 11:19:55 -- common/autotest_common.sh@1557 -- # continue 00:04:21.081 11:19:55 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:21.081 11:19:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:21.081 11:19:55 -- common/autotest_common.sh@10 -- # set +x 00:04:21.081 11:19:55 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:21.081 11:19:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:21.081 11:19:55 -- common/autotest_common.sh@10 -- # set +x 00:04:21.081 11:19:55 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:24.370 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:24.370 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:24.370 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:24.370 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:24.370 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:24.370 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:24.370 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:24.370 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:24.370 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:24.370 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 
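For reference, the pre-cleanup pass traced above decides whether the controller supports namespace management by parsing the OACS field reported by nvme id-ctrl. A minimal sketch of that check, assuming the helper masks the Namespace Management bit (0x8) of OACS; the grep/cut pipeline and variable names mirror the trace, while the masking step itself is an assumption:

    # Sketch: derive namespace-management support from OACS, as in the trace above.
    # Assumes nvme-cli is installed and nvme_ctrlr points at the controller (here /dev/nvme0).
    nvme_ctrlr=/dev/nvme0
    oacs=$(nvme id-ctrl "$nvme_ctrlr" | grep oacs | cut -d: -f2)     # e.g. ' 0xe'
    oacs_ns_manage=$((oacs & 0x8))                                   # bit 3 = Namespace Management (assumed mask)
    if [ "$oacs_ns_manage" -ne 0 ]; then
        # also read the unallocated capacity field before deciding whether to skip the device
        unvmcap=$(nvme id-ctrl "$nvme_ctrlr" | grep unvmcap | cut -d: -f2)   # ' 0' in this run
    fi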
00:04:24.370 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:24.370 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:24.370 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:24.370 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:24.370 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:24.370 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:24.938 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:04:25.199 11:19:59 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:25.199 11:19:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:25.199 11:19:59 -- common/autotest_common.sh@10 -- # set +x 00:04:25.199 11:19:59 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:25.199 11:19:59 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:25.199 11:19:59 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:25.199 11:19:59 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:25.199 11:19:59 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:25.199 11:19:59 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:25.199 11:19:59 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:25.199 11:19:59 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:25.199 11:19:59 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:25.199 11:19:59 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:25.199 11:19:59 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:25.199 11:19:59 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:25.199 11:19:59 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:86:00.0 00:04:25.199 11:19:59 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:25.199 11:19:59 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:86:00.0/device 00:04:25.199 11:19:59 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:25.199 11:19:59 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:25.199 11:19:59 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:25.199 11:19:59 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:86:00.0 00:04:25.199 11:19:59 -- common/autotest_common.sh@1592 -- # [[ -z 0000:86:00.0 ]] 00:04:25.199 11:19:59 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2583281 00:04:25.199 11:19:59 -- common/autotest_common.sh@1598 -- # waitforlisten 2583281 00:04:25.199 11:19:59 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.199 11:19:59 -- common/autotest_common.sh@829 -- # '[' -z 2583281 ']' 00:04:25.199 11:19:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.199 11:19:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:25.199 11:19:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.199 11:19:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:25.199 11:19:59 -- common/autotest_common.sh@10 -- # set +x 00:04:25.199 [2024-07-15 11:19:59.622235] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
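The opal_revert_cleanup step above first collects the NVMe controllers' PCI addresses and then keeps only those whose PCI device ID is 0x0a54. A rough sketch of that filter, following the gen_nvme.sh | jq pipeline and the sysfs device-ID check visible in the trace; the opal_bdfs array name is illustrative:

    # Sketch of the BDF discovery and 0x0a54 filter traced above.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))   # e.g. 0000:86:00.0
    opal_bdfs=()
    for bdf in "${bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")    # PCI device ID, e.g. 0x0a54
        [[ $device == 0x0a54 ]] && opal_bdfs+=("$bdf")
    done
    printf '%s\n' "${opal_bdfs[@]}"

Each surviving BDF is then attached as an SPDK bdev controller and sent a bdev_nvme_opal_revert RPC, which on this controller fails with "nvme0 not support opal", as shown a little further down.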
00:04:25.199 [2024-07-15 11:19:59.622309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2583281 ] 00:04:25.199 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.458 [2024-07-15 11:19:59.702399] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.458 [2024-07-15 11:19:59.793345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.717 11:20:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:25.717 11:20:00 -- common/autotest_common.sh@862 -- # return 0 00:04:25.717 11:20:00 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:25.717 11:20:00 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:25.717 11:20:00 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:86:00.0 00:04:29.004 nvme0n1 00:04:29.004 11:20:03 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:29.004 [2024-07-15 11:20:03.329911] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:29.004 request: 00:04:29.004 { 00:04:29.004 "nvme_ctrlr_name": "nvme0", 00:04:29.004 "password": "test", 00:04:29.004 "method": "bdev_nvme_opal_revert", 00:04:29.004 "req_id": 1 00:04:29.004 } 00:04:29.004 Got JSON-RPC error response 00:04:29.004 response: 00:04:29.004 { 00:04:29.004 "code": -32602, 00:04:29.004 "message": "Invalid parameters" 00:04:29.004 } 00:04:29.004 11:20:03 -- common/autotest_common.sh@1604 -- # true 00:04:29.004 11:20:03 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:29.004 11:20:03 -- common/autotest_common.sh@1608 -- # killprocess 2583281 00:04:29.004 11:20:03 -- common/autotest_common.sh@948 -- # '[' -z 2583281 ']' 00:04:29.004 11:20:03 -- common/autotest_common.sh@952 -- # kill -0 2583281 00:04:29.004 11:20:03 -- common/autotest_common.sh@953 -- # uname 00:04:29.004 11:20:03 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:29.004 11:20:03 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2583281 00:04:29.004 11:20:03 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:29.004 11:20:03 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:29.004 11:20:03 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2583281' 00:04:29.004 killing process with pid 2583281 00:04:29.004 11:20:03 -- common/autotest_common.sh@967 -- # kill 2583281 00:04:29.004 11:20:03 -- common/autotest_common.sh@972 -- # wait 2583281 00:04:30.921 11:20:05 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:30.921 11:20:05 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:30.921 11:20:05 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:30.921 11:20:05 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:30.921 11:20:05 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:30.921 11:20:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:30.921 11:20:05 -- common/autotest_common.sh@10 -- # set +x 00:04:30.921 11:20:05 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:30.921 11:20:05 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:30.921 11:20:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 
']' 00:04:30.921 11:20:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.921 11:20:05 -- common/autotest_common.sh@10 -- # set +x 00:04:30.921 ************************************ 00:04:30.921 START TEST env 00:04:30.921 ************************************ 00:04:30.921 11:20:05 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:30.921 * Looking for test storage... 00:04:30.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:30.921 11:20:05 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:30.921 11:20:05 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.921 11:20:05 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.921 11:20:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.921 ************************************ 00:04:30.921 START TEST env_memory 00:04:30.921 ************************************ 00:04:30.921 11:20:05 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:30.921 00:04:30.921 00:04:30.921 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.921 http://cunit.sourceforge.net/ 00:04:30.921 00:04:30.921 00:04:30.921 Suite: memory 00:04:30.921 Test: alloc and free memory map ...[2024-07-15 11:20:05.289092] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:30.921 passed 00:04:30.921 Test: mem map translation ...[2024-07-15 11:20:05.318222] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:30.921 [2024-07-15 11:20:05.318241] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:30.921 [2024-07-15 11:20:05.318302] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:30.921 [2024-07-15 11:20:05.318312] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:30.921 passed 00:04:30.921 Test: mem map registration ...[2024-07-15 11:20:05.378178] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:30.921 [2024-07-15 11:20:05.378202] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:31.248 passed 00:04:31.248 Test: mem map adjacent registrations ...passed 00:04:31.248 00:04:31.248 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.248 suites 1 1 n/a 0 0 00:04:31.248 tests 4 4 4 0 0 00:04:31.248 asserts 152 152 152 0 n/a 00:04:31.248 00:04:31.248 Elapsed time = 0.203 seconds 00:04:31.248 00:04:31.248 real 0m0.217s 00:04:31.248 user 0m0.209s 00:04:31.248 sys 0m0.007s 00:04:31.248 11:20:05 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.248 11:20:05 env.env_memory -- common/autotest_common.sh@10 -- # 
set +x 00:04:31.248 ************************************ 00:04:31.248 END TEST env_memory 00:04:31.248 ************************************ 00:04:31.248 11:20:05 env -- common/autotest_common.sh@1142 -- # return 0 00:04:31.248 11:20:05 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:31.248 11:20:05 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.248 11:20:05 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.248 11:20:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.248 ************************************ 00:04:31.248 START TEST env_vtophys 00:04:31.248 ************************************ 00:04:31.248 11:20:05 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:31.248 EAL: lib.eal log level changed from notice to debug 00:04:31.248 EAL: Detected lcore 0 as core 0 on socket 0 00:04:31.248 EAL: Detected lcore 1 as core 1 on socket 0 00:04:31.248 EAL: Detected lcore 2 as core 2 on socket 0 00:04:31.248 EAL: Detected lcore 3 as core 3 on socket 0 00:04:31.248 EAL: Detected lcore 4 as core 4 on socket 0 00:04:31.248 EAL: Detected lcore 5 as core 5 on socket 0 00:04:31.248 EAL: Detected lcore 6 as core 6 on socket 0 00:04:31.248 EAL: Detected lcore 7 as core 8 on socket 0 00:04:31.248 EAL: Detected lcore 8 as core 9 on socket 0 00:04:31.248 EAL: Detected lcore 9 as core 10 on socket 0 00:04:31.248 EAL: Detected lcore 10 as core 11 on socket 0 00:04:31.248 EAL: Detected lcore 11 as core 12 on socket 0 00:04:31.248 EAL: Detected lcore 12 as core 13 on socket 0 00:04:31.248 EAL: Detected lcore 13 as core 14 on socket 0 00:04:31.248 EAL: Detected lcore 14 as core 16 on socket 0 00:04:31.248 EAL: Detected lcore 15 as core 17 on socket 0 00:04:31.248 EAL: Detected lcore 16 as core 18 on socket 0 00:04:31.248 EAL: Detected lcore 17 as core 19 on socket 0 00:04:31.248 EAL: Detected lcore 18 as core 20 on socket 0 00:04:31.248 EAL: Detected lcore 19 as core 21 on socket 0 00:04:31.248 EAL: Detected lcore 20 as core 22 on socket 0 00:04:31.248 EAL: Detected lcore 21 as core 24 on socket 0 00:04:31.248 EAL: Detected lcore 22 as core 25 on socket 0 00:04:31.248 EAL: Detected lcore 23 as core 26 on socket 0 00:04:31.248 EAL: Detected lcore 24 as core 27 on socket 0 00:04:31.248 EAL: Detected lcore 25 as core 28 on socket 0 00:04:31.248 EAL: Detected lcore 26 as core 29 on socket 0 00:04:31.248 EAL: Detected lcore 27 as core 30 on socket 0 00:04:31.248 EAL: Detected lcore 28 as core 0 on socket 1 00:04:31.248 EAL: Detected lcore 29 as core 1 on socket 1 00:04:31.248 EAL: Detected lcore 30 as core 2 on socket 1 00:04:31.248 EAL: Detected lcore 31 as core 3 on socket 1 00:04:31.248 EAL: Detected lcore 32 as core 4 on socket 1 00:04:31.248 EAL: Detected lcore 33 as core 5 on socket 1 00:04:31.248 EAL: Detected lcore 34 as core 6 on socket 1 00:04:31.248 EAL: Detected lcore 35 as core 8 on socket 1 00:04:31.248 EAL: Detected lcore 36 as core 9 on socket 1 00:04:31.248 EAL: Detected lcore 37 as core 10 on socket 1 00:04:31.248 EAL: Detected lcore 38 as core 11 on socket 1 00:04:31.248 EAL: Detected lcore 39 as core 12 on socket 1 00:04:31.248 EAL: Detected lcore 40 as core 13 on socket 1 00:04:31.248 EAL: Detected lcore 41 as core 14 on socket 1 00:04:31.248 EAL: Detected lcore 42 as core 16 on socket 1 00:04:31.248 EAL: Detected lcore 43 as core 17 on socket 1 00:04:31.248 EAL: Detected lcore 44 as core 
18 on socket 1 00:04:31.248 EAL: Detected lcore 45 as core 19 on socket 1 00:04:31.248 EAL: Detected lcore 46 as core 20 on socket 1 00:04:31.248 EAL: Detected lcore 47 as core 21 on socket 1 00:04:31.248 EAL: Detected lcore 48 as core 22 on socket 1 00:04:31.248 EAL: Detected lcore 49 as core 24 on socket 1 00:04:31.248 EAL: Detected lcore 50 as core 25 on socket 1 00:04:31.248 EAL: Detected lcore 51 as core 26 on socket 1 00:04:31.248 EAL: Detected lcore 52 as core 27 on socket 1 00:04:31.248 EAL: Detected lcore 53 as core 28 on socket 1 00:04:31.248 EAL: Detected lcore 54 as core 29 on socket 1 00:04:31.248 EAL: Detected lcore 55 as core 30 on socket 1 00:04:31.248 EAL: Detected lcore 56 as core 0 on socket 0 00:04:31.248 EAL: Detected lcore 57 as core 1 on socket 0 00:04:31.248 EAL: Detected lcore 58 as core 2 on socket 0 00:04:31.248 EAL: Detected lcore 59 as core 3 on socket 0 00:04:31.248 EAL: Detected lcore 60 as core 4 on socket 0 00:04:31.248 EAL: Detected lcore 61 as core 5 on socket 0 00:04:31.248 EAL: Detected lcore 62 as core 6 on socket 0 00:04:31.248 EAL: Detected lcore 63 as core 8 on socket 0 00:04:31.248 EAL: Detected lcore 64 as core 9 on socket 0 00:04:31.248 EAL: Detected lcore 65 as core 10 on socket 0 00:04:31.248 EAL: Detected lcore 66 as core 11 on socket 0 00:04:31.248 EAL: Detected lcore 67 as core 12 on socket 0 00:04:31.248 EAL: Detected lcore 68 as core 13 on socket 0 00:04:31.248 EAL: Detected lcore 69 as core 14 on socket 0 00:04:31.248 EAL: Detected lcore 70 as core 16 on socket 0 00:04:31.248 EAL: Detected lcore 71 as core 17 on socket 0 00:04:31.248 EAL: Detected lcore 72 as core 18 on socket 0 00:04:31.248 EAL: Detected lcore 73 as core 19 on socket 0 00:04:31.248 EAL: Detected lcore 74 as core 20 on socket 0 00:04:31.248 EAL: Detected lcore 75 as core 21 on socket 0 00:04:31.248 EAL: Detected lcore 76 as core 22 on socket 0 00:04:31.248 EAL: Detected lcore 77 as core 24 on socket 0 00:04:31.248 EAL: Detected lcore 78 as core 25 on socket 0 00:04:31.248 EAL: Detected lcore 79 as core 26 on socket 0 00:04:31.248 EAL: Detected lcore 80 as core 27 on socket 0 00:04:31.248 EAL: Detected lcore 81 as core 28 on socket 0 00:04:31.248 EAL: Detected lcore 82 as core 29 on socket 0 00:04:31.248 EAL: Detected lcore 83 as core 30 on socket 0 00:04:31.248 EAL: Detected lcore 84 as core 0 on socket 1 00:04:31.248 EAL: Detected lcore 85 as core 1 on socket 1 00:04:31.248 EAL: Detected lcore 86 as core 2 on socket 1 00:04:31.248 EAL: Detected lcore 87 as core 3 on socket 1 00:04:31.248 EAL: Detected lcore 88 as core 4 on socket 1 00:04:31.248 EAL: Detected lcore 89 as core 5 on socket 1 00:04:31.248 EAL: Detected lcore 90 as core 6 on socket 1 00:04:31.248 EAL: Detected lcore 91 as core 8 on socket 1 00:04:31.248 EAL: Detected lcore 92 as core 9 on socket 1 00:04:31.248 EAL: Detected lcore 93 as core 10 on socket 1 00:04:31.248 EAL: Detected lcore 94 as core 11 on socket 1 00:04:31.248 EAL: Detected lcore 95 as core 12 on socket 1 00:04:31.248 EAL: Detected lcore 96 as core 13 on socket 1 00:04:31.248 EAL: Detected lcore 97 as core 14 on socket 1 00:04:31.248 EAL: Detected lcore 98 as core 16 on socket 1 00:04:31.248 EAL: Detected lcore 99 as core 17 on socket 1 00:04:31.248 EAL: Detected lcore 100 as core 18 on socket 1 00:04:31.248 EAL: Detected lcore 101 as core 19 on socket 1 00:04:31.248 EAL: Detected lcore 102 as core 20 on socket 1 00:04:31.248 EAL: Detected lcore 103 as core 21 on socket 1 00:04:31.248 EAL: Detected lcore 104 as core 22 on socket 1 00:04:31.248 
EAL: Detected lcore 105 as core 24 on socket 1 00:04:31.248 EAL: Detected lcore 106 as core 25 on socket 1 00:04:31.248 EAL: Detected lcore 107 as core 26 on socket 1 00:04:31.248 EAL: Detected lcore 108 as core 27 on socket 1 00:04:31.248 EAL: Detected lcore 109 as core 28 on socket 1 00:04:31.248 EAL: Detected lcore 110 as core 29 on socket 1 00:04:31.248 EAL: Detected lcore 111 as core 30 on socket 1 00:04:31.248 EAL: Maximum logical cores by configuration: 128 00:04:31.248 EAL: Detected CPU lcores: 112 00:04:31.248 EAL: Detected NUMA nodes: 2 00:04:31.248 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:31.248 EAL: Detected shared linkage of DPDK 00:04:31.248 EAL: No shared files mode enabled, IPC will be disabled 00:04:31.248 EAL: Bus pci wants IOVA as 'DC' 00:04:31.248 EAL: Buses did not request a specific IOVA mode. 00:04:31.248 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:31.248 EAL: Selected IOVA mode 'VA' 00:04:31.248 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.248 EAL: Probing VFIO support... 00:04:31.248 EAL: IOMMU type 1 (Type 1) is supported 00:04:31.248 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:31.248 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:31.248 EAL: VFIO support initialized 00:04:31.248 EAL: Ask a virtual area of 0x2e000 bytes 00:04:31.248 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:31.248 EAL: Setting up physically contiguous memory... 00:04:31.248 EAL: Setting maximum number of open files to 524288 00:04:31.248 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:31.248 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:31.249 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:31.249 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.249 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:31.249 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.249 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.249 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:31.249 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:31.249 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.249 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:31.249 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.249 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.249 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:31.249 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:31.249 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.249 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:31.249 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.249 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.249 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:31.249 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:31.249 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.249 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:31.249 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.249 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.249 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:31.249 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:31.249 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:31.249 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.249 EAL: 
Virtual area found at 0x201000800000 (size = 0x61000) 00:04:31.249 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:31.249 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.249 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:31.249 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:31.249 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.249 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:31.249 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:31.249 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.249 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:31.249 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:31.249 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.249 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:31.249 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:31.249 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.249 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:31.249 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:31.249 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.249 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:31.249 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:31.249 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.249 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:31.249 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:31.249 EAL: Hugepages will be freed exactly as allocated. 00:04:31.249 EAL: No shared files mode enabled, IPC is disabled 00:04:31.249 EAL: No shared files mode enabled, IPC is disabled 00:04:31.249 EAL: TSC frequency is ~2200000 KHz 00:04:31.249 EAL: Main lcore 0 is ready (tid=7f90146dfa00;cpuset=[0]) 00:04:31.249 EAL: Trying to obtain current memory policy. 00:04:31.249 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.249 EAL: Restoring previous memory policy: 0 00:04:31.249 EAL: request: mp_malloc_sync 00:04:31.249 EAL: No shared files mode enabled, IPC is disabled 00:04:31.249 EAL: Heap on socket 0 was expanded by 2MB 00:04:31.249 EAL: No shared files mode enabled, IPC is disabled 00:04:31.249 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:31.249 EAL: Mem event callback 'spdk:(nil)' registered 00:04:31.249 00:04:31.249 00:04:31.249 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.249 http://cunit.sourceforge.net/ 00:04:31.249 00:04:31.249 00:04:31.249 Suite: components_suite 00:04:31.249 Test: vtophys_malloc_test ...passed 00:04:31.249 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:31.249 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.249 EAL: Restoring previous memory policy: 4 00:04:31.249 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.249 EAL: request: mp_malloc_sync 00:04:31.249 EAL: No shared files mode enabled, IPC is disabled 00:04:31.249 EAL: Heap on socket 0 was expanded by 4MB 00:04:31.249 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.249 EAL: request: mp_malloc_sync 00:04:31.249 EAL: No shared files mode enabled, IPC is disabled 00:04:31.249 EAL: Heap on socket 0 was shrunk by 4MB 00:04:31.249 EAL: Trying to obtain current memory policy. 
00:04:31.249 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.249 EAL: Restoring previous memory policy: 4 00:04:31.249 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.249 EAL: request: mp_malloc_sync 00:04:31.249 EAL: No shared files mode enabled, IPC is disabled 00:04:31.249 EAL: Heap on socket 0 was expanded by 6MB 00:04:31.249 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.249 EAL: request: mp_malloc_sync 00:04:31.249 EAL: No shared files mode enabled, IPC is disabled 00:04:31.249 EAL: Heap on socket 0 was shrunk by 6MB 00:04:31.249 EAL: Trying to obtain current memory policy. 00:04:31.249 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.249 EAL: Restoring previous memory policy: 4 00:04:31.249 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.249 EAL: request: mp_malloc_sync 00:04:31.249 EAL: No shared files mode enabled, IPC is disabled 00:04:31.249 EAL: Heap on socket 0 was expanded by 10MB 00:04:31.249 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.249 EAL: request: mp_malloc_sync 00:04:31.249 EAL: No shared files mode enabled, IPC is disabled 00:04:31.249 EAL: Heap on socket 0 was shrunk by 10MB 00:04:31.249 EAL: Trying to obtain current memory policy. 00:04:31.249 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.249 EAL: Restoring previous memory policy: 4 00:04:31.249 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.249 EAL: request: mp_malloc_sync 00:04:31.249 EAL: No shared files mode enabled, IPC is disabled 00:04:31.249 EAL: Heap on socket 0 was expanded by 18MB 00:04:31.249 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.249 EAL: request: mp_malloc_sync 00:04:31.249 EAL: No shared files mode enabled, IPC is disabled 00:04:31.249 EAL: Heap on socket 0 was shrunk by 18MB 00:04:31.249 EAL: Trying to obtain current memory policy. 00:04:31.249 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.249 EAL: Restoring previous memory policy: 4 00:04:31.249 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.249 EAL: request: mp_malloc_sync 00:04:31.249 EAL: No shared files mode enabled, IPC is disabled 00:04:31.249 EAL: Heap on socket 0 was expanded by 34MB 00:04:31.249 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.249 EAL: request: mp_malloc_sync 00:04:31.249 EAL: No shared files mode enabled, IPC is disabled 00:04:31.249 EAL: Heap on socket 0 was shrunk by 34MB 00:04:31.249 EAL: Trying to obtain current memory policy. 00:04:31.249 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.249 EAL: Restoring previous memory policy: 4 00:04:31.249 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.249 EAL: request: mp_malloc_sync 00:04:31.249 EAL: No shared files mode enabled, IPC is disabled 00:04:31.249 EAL: Heap on socket 0 was expanded by 66MB 00:04:31.249 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.537 EAL: request: mp_malloc_sync 00:04:31.537 EAL: No shared files mode enabled, IPC is disabled 00:04:31.537 EAL: Heap on socket 0 was shrunk by 66MB 00:04:31.537 EAL: Trying to obtain current memory policy. 
00:04:31.537 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.537 EAL: Restoring previous memory policy: 4 00:04:31.537 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.537 EAL: request: mp_malloc_sync 00:04:31.537 EAL: No shared files mode enabled, IPC is disabled 00:04:31.537 EAL: Heap on socket 0 was expanded by 130MB 00:04:31.537 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.537 EAL: request: mp_malloc_sync 00:04:31.537 EAL: No shared files mode enabled, IPC is disabled 00:04:31.537 EAL: Heap on socket 0 was shrunk by 130MB 00:04:31.537 EAL: Trying to obtain current memory policy. 00:04:31.537 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.537 EAL: Restoring previous memory policy: 4 00:04:31.537 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.537 EAL: request: mp_malloc_sync 00:04:31.537 EAL: No shared files mode enabled, IPC is disabled 00:04:31.537 EAL: Heap on socket 0 was expanded by 258MB 00:04:31.537 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.537 EAL: request: mp_malloc_sync 00:04:31.537 EAL: No shared files mode enabled, IPC is disabled 00:04:31.537 EAL: Heap on socket 0 was shrunk by 258MB 00:04:31.537 EAL: Trying to obtain current memory policy. 00:04:31.537 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.537 EAL: Restoring previous memory policy: 4 00:04:31.537 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.537 EAL: request: mp_malloc_sync 00:04:31.537 EAL: No shared files mode enabled, IPC is disabled 00:04:31.537 EAL: Heap on socket 0 was expanded by 514MB 00:04:31.794 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.794 EAL: request: mp_malloc_sync 00:04:31.794 EAL: No shared files mode enabled, IPC is disabled 00:04:31.794 EAL: Heap on socket 0 was shrunk by 514MB 00:04:31.794 EAL: Trying to obtain current memory policy. 
00:04:31.794 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.052 EAL: Restoring previous memory policy: 4 00:04:32.052 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.052 EAL: request: mp_malloc_sync 00:04:32.052 EAL: No shared files mode enabled, IPC is disabled 00:04:32.052 EAL: Heap on socket 0 was expanded by 1026MB 00:04:32.052 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.310 EAL: request: mp_malloc_sync 00:04:32.310 EAL: No shared files mode enabled, IPC is disabled 00:04:32.310 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:32.310 passed 00:04:32.310 00:04:32.310 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.310 suites 1 1 n/a 0 0 00:04:32.310 tests 2 2 2 0 0 00:04:32.310 asserts 497 497 497 0 n/a 00:04:32.310 00:04:32.310 Elapsed time = 1.022 seconds 00:04:32.310 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.310 EAL: request: mp_malloc_sync 00:04:32.310 EAL: No shared files mode enabled, IPC is disabled 00:04:32.310 EAL: Heap on socket 0 was shrunk by 2MB 00:04:32.310 EAL: No shared files mode enabled, IPC is disabled 00:04:32.310 EAL: No shared files mode enabled, IPC is disabled 00:04:32.310 EAL: No shared files mode enabled, IPC is disabled 00:04:32.310 00:04:32.310 real 0m1.157s 00:04:32.310 user 0m0.667s 00:04:32.310 sys 0m0.461s 00:04:32.310 11:20:06 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.310 11:20:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:32.310 ************************************ 00:04:32.310 END TEST env_vtophys 00:04:32.310 ************************************ 00:04:32.310 11:20:06 env -- common/autotest_common.sh@1142 -- # return 0 00:04:32.310 11:20:06 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:32.310 11:20:06 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.310 11:20:06 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.310 11:20:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.310 ************************************ 00:04:32.310 START TEST env_pci 00:04:32.310 ************************************ 00:04:32.310 11:20:06 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:32.310 00:04:32.310 00:04:32.310 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.310 http://cunit.sourceforge.net/ 00:04:32.310 00:04:32.310 00:04:32.310 Suite: pci 00:04:32.310 Test: pci_hook ...[2024-07-15 11:20:06.773199] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2584671 has claimed it 00:04:32.569 EAL: Cannot find device (10000:00:01.0) 00:04:32.569 EAL: Failed to attach device on primary process 00:04:32.569 passed 00:04:32.569 00:04:32.569 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.569 suites 1 1 n/a 0 0 00:04:32.569 tests 1 1 1 0 0 00:04:32.569 asserts 25 25 25 0 n/a 00:04:32.569 00:04:32.569 Elapsed time = 0.049 seconds 00:04:32.569 00:04:32.569 real 0m0.071s 00:04:32.569 user 0m0.019s 00:04:32.569 sys 0m0.051s 00:04:32.569 11:20:06 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.569 11:20:06 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:32.569 ************************************ 00:04:32.569 END TEST env_pci 00:04:32.569 ************************************ 
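Each env sub-test above (env_memory, env_vtophys, env_pci, and the ones that follow) is launched through the run_test helper, which is what produces the START TEST / END TEST banners, the real/user/sys timing lines and the xtrace toggling seen in this log. A simplified sketch of that wrapper, reconstructed only from the banners and timing output in the trace; the real helper in autotest_common.sh carries more bookkeeping:

    # Simplified sketch of run_test as suggested by the surrounding trace; not the full implementation.
    run_test() {
        local test_name=$1; shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"                 # the per-test real/user/sys lines come from this timing
        local rc=$?
        xtrace_disable            # matches the 'xtrace_disable ... set +x' pairs in the log
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
        return $rc
    }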
00:04:32.569 11:20:06 env -- common/autotest_common.sh@1142 -- # return 0 00:04:32.569 11:20:06 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:32.569 11:20:06 env -- env/env.sh@15 -- # uname 00:04:32.569 11:20:06 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:32.569 11:20:06 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:32.569 11:20:06 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:32.569 11:20:06 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:32.569 11:20:06 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.569 11:20:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.569 ************************************ 00:04:32.569 START TEST env_dpdk_post_init 00:04:32.569 ************************************ 00:04:32.569 11:20:06 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:32.569 EAL: Detected CPU lcores: 112 00:04:32.569 EAL: Detected NUMA nodes: 2 00:04:32.569 EAL: Detected shared linkage of DPDK 00:04:32.569 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:32.569 EAL: Selected IOVA mode 'VA' 00:04:32.569 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.569 EAL: VFIO support initialized 00:04:32.569 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:32.828 EAL: Using IOMMU type 1 (Type 1) 00:04:32.828 EAL: Ignore mapping IO port bar(1) 00:04:32.828 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:32.828 EAL: Ignore mapping IO port bar(1) 00:04:32.828 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:32.828 EAL: Ignore mapping IO port bar(1) 00:04:32.828 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:32.828 EAL: Ignore mapping IO port bar(1) 00:04:32.828 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:32.828 EAL: Ignore mapping IO port bar(1) 00:04:32.828 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:32.828 EAL: Ignore mapping IO port bar(1) 00:04:32.828 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:32.828 EAL: Ignore mapping IO port bar(1) 00:04:32.828 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:32.828 EAL: Ignore mapping IO port bar(1) 00:04:32.828 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:32.828 EAL: Ignore mapping IO port bar(1) 00:04:32.828 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:32.828 EAL: Ignore mapping IO port bar(1) 00:04:32.828 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:32.828 EAL: Ignore mapping IO port bar(1) 00:04:32.828 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:32.828 EAL: Ignore mapping IO port bar(1) 00:04:32.828 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:32.828 EAL: Ignore mapping IO port bar(1) 00:04:32.828 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:32.828 EAL: Ignore mapping IO port bar(1) 00:04:32.828 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 
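The env_dpdk_post_init run being probed above was assembled a few steps earlier in env.sh: the core mask is fixed to 0x1 and, when running on Linux, a --base-virtaddr argument is appended before the binary is handed to run_test. Per the traced lines, roughly:

    # How env.sh builds the arguments for env_dpdk_post_init (mirrors the traced @14/@15/@22/@24 lines).
    argv='-c 0x1 '
    if [ "$(uname)" = Linux ]; then
        argv+=--base-virtaddr=0x200000000000
    fi
    run_test env_dpdk_post_init \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init $argv

Leaving $argv unquoted in the final call is deliberate: it has to word-split into the separate -c and --base-virtaddr options seen in the trace.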
00:04:32.828 EAL: Ignore mapping IO port bar(1) 00:04:32.828 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:32.828 EAL: Ignore mapping IO port bar(1) 00:04:32.828 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:33.760 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:86:00.0 (socket 1) 00:04:37.079 EAL: Releasing PCI mapped resource for 0000:86:00.0 00:04:37.079 EAL: Calling pci_unmap_resource for 0000:86:00.0 at 0x202001040000 00:04:37.079 Starting DPDK initialization... 00:04:37.079 Starting SPDK post initialization... 00:04:37.079 SPDK NVMe probe 00:04:37.079 Attaching to 0000:86:00.0 00:04:37.079 Attached to 0000:86:00.0 00:04:37.079 Cleaning up... 00:04:37.079 00:04:37.079 real 0m4.501s 00:04:37.079 user 0m3.380s 00:04:37.079 sys 0m0.172s 00:04:37.079 11:20:11 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.079 11:20:11 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:37.079 ************************************ 00:04:37.079 END TEST env_dpdk_post_init 00:04:37.079 ************************************ 00:04:37.079 11:20:11 env -- common/autotest_common.sh@1142 -- # return 0 00:04:37.079 11:20:11 env -- env/env.sh@26 -- # uname 00:04:37.079 11:20:11 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:37.079 11:20:11 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:37.079 11:20:11 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.079 11:20:11 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.079 11:20:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.079 ************************************ 00:04:37.079 START TEST env_mem_callbacks 00:04:37.079 ************************************ 00:04:37.079 11:20:11 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:37.079 EAL: Detected CPU lcores: 112 00:04:37.079 EAL: Detected NUMA nodes: 2 00:04:37.079 EAL: Detected shared linkage of DPDK 00:04:37.079 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:37.079 EAL: Selected IOVA mode 'VA' 00:04:37.079 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.079 EAL: VFIO support initialized 00:04:37.079 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:37.079 00:04:37.079 00:04:37.079 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.079 http://cunit.sourceforge.net/ 00:04:37.079 00:04:37.079 00:04:37.079 Suite: memory 00:04:37.079 Test: test ... 
00:04:37.079 register 0x200000200000 2097152 00:04:37.079 malloc 3145728 00:04:37.079 register 0x200000400000 4194304 00:04:37.079 buf 0x200000500000 len 3145728 PASSED 00:04:37.079 malloc 64 00:04:37.079 buf 0x2000004fff40 len 64 PASSED 00:04:37.079 malloc 4194304 00:04:37.079 register 0x200000800000 6291456 00:04:37.079 buf 0x200000a00000 len 4194304 PASSED 00:04:37.079 free 0x200000500000 3145728 00:04:37.079 free 0x2000004fff40 64 00:04:37.079 unregister 0x200000400000 4194304 PASSED 00:04:37.079 free 0x200000a00000 4194304 00:04:37.079 unregister 0x200000800000 6291456 PASSED 00:04:37.079 malloc 8388608 00:04:37.079 register 0x200000400000 10485760 00:04:37.079 buf 0x200000600000 len 8388608 PASSED 00:04:37.079 free 0x200000600000 8388608 00:04:37.079 unregister 0x200000400000 10485760 PASSED 00:04:37.079 passed 00:04:37.079 00:04:37.079 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.079 suites 1 1 n/a 0 0 00:04:37.079 tests 1 1 1 0 0 00:04:37.079 asserts 15 15 15 0 n/a 00:04:37.079 00:04:37.079 Elapsed time = 0.008 seconds 00:04:37.079 00:04:37.079 real 0m0.061s 00:04:37.079 user 0m0.020s 00:04:37.079 sys 0m0.041s 00:04:37.079 11:20:11 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.079 11:20:11 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:37.079 ************************************ 00:04:37.079 END TEST env_mem_callbacks 00:04:37.079 ************************************ 00:04:37.337 11:20:11 env -- common/autotest_common.sh@1142 -- # return 0 00:04:37.337 00:04:37.337 real 0m6.451s 00:04:37.337 user 0m4.481s 00:04:37.337 sys 0m1.022s 00:04:37.337 11:20:11 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.337 11:20:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.337 ************************************ 00:04:37.337 END TEST env 00:04:37.337 ************************************ 00:04:37.337 11:20:11 -- common/autotest_common.sh@1142 -- # return 0 00:04:37.337 11:20:11 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:37.337 11:20:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.337 11:20:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.337 11:20:11 -- common/autotest_common.sh@10 -- # set +x 00:04:37.337 ************************************ 00:04:37.337 START TEST rpc 00:04:37.337 ************************************ 00:04:37.337 11:20:11 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:37.337 * Looking for test storage... 00:04:37.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:37.337 11:20:11 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2585703 00:04:37.337 11:20:11 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.337 11:20:11 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:37.337 11:20:11 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2585703 00:04:37.337 11:20:11 rpc -- common/autotest_common.sh@829 -- # '[' -z 2585703 ']' 00:04:37.337 11:20:11 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.337 11:20:11 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:37.337 11:20:11 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
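The rpc suite that starts here launches spdk_tgt with the bdev tracepoint group enabled (-e bdev), waits for its UNIX-domain RPC socket, and then drives bdev RPCs against it; rpc_cmd in the xtrace is a thin wrapper around scripts/rpc.py. A condensed sketch of the same round trip the rpc_integrity test below performs, run as root from the SPDK tree; the polling loop is a simplified stand-in for the waitforlisten helper, and the Malloc0/Passthru0 names match what the log shows the target assigning.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

$SPDK/build/bin/spdk_tgt -e bdev &                 # enable the bdev tracepoint group
tgt=$!
until $SPDK/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done

$SPDK/scripts/rpc.py bdev_malloc_create 8 512      # 8 MiB malloc bdev, 512-byte blocks -> Malloc0
$SPDK/scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
$SPDK/scripts/rpc.py bdev_get_bdevs | jq length    # 2: Malloc0 plus Passthru0
$SPDK/scripts/rpc.py bdev_passthru_delete Passthru0
$SPDK/scripts/rpc.py bdev_malloc_delete Malloc0

kill $tgt; wait $tgt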
00:04:37.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.337 11:20:11 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:37.337 11:20:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.337 [2024-07-15 11:20:11.786605] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:04:37.337 [2024-07-15 11:20:11.786665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2585703 ] 00:04:37.596 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.596 [2024-07-15 11:20:11.861490] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.596 [2024-07-15 11:20:11.956346] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:37.596 [2024-07-15 11:20:11.956388] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2585703' to capture a snapshot of events at runtime. 00:04:37.596 [2024-07-15 11:20:11.956399] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:37.596 [2024-07-15 11:20:11.956408] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:37.596 [2024-07-15 11:20:11.956416] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2585703 for offline analysis/debug. 00:04:37.596 [2024-07-15 11:20:11.956441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.854 11:20:12 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:37.854 11:20:12 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:37.854 11:20:12 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:37.854 11:20:12 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:37.854 11:20:12 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:37.854 11:20:12 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:37.854 11:20:12 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.854 11:20:12 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.854 11:20:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.854 ************************************ 00:04:37.854 START TEST rpc_integrity 00:04:37.854 ************************************ 00:04:37.854 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:37.854 11:20:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:37.854 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.854 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.854 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.854 11:20:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:04:37.854 11:20:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:37.854 11:20:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:37.854 11:20:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:37.854 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.854 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.854 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.854 11:20:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:37.854 11:20:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:37.855 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.855 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.855 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.855 11:20:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:37.855 { 00:04:37.855 "name": "Malloc0", 00:04:37.855 "aliases": [ 00:04:37.855 "bc9fbed5-c0bc-437c-a47d-f8307a4e0fe6" 00:04:37.855 ], 00:04:37.855 "product_name": "Malloc disk", 00:04:37.855 "block_size": 512, 00:04:37.855 "num_blocks": 16384, 00:04:37.855 "uuid": "bc9fbed5-c0bc-437c-a47d-f8307a4e0fe6", 00:04:37.855 "assigned_rate_limits": { 00:04:37.855 "rw_ios_per_sec": 0, 00:04:37.855 "rw_mbytes_per_sec": 0, 00:04:37.855 "r_mbytes_per_sec": 0, 00:04:37.855 "w_mbytes_per_sec": 0 00:04:37.855 }, 00:04:37.855 "claimed": false, 00:04:37.855 "zoned": false, 00:04:37.855 "supported_io_types": { 00:04:37.855 "read": true, 00:04:37.855 "write": true, 00:04:37.855 "unmap": true, 00:04:37.855 "flush": true, 00:04:37.855 "reset": true, 00:04:37.855 "nvme_admin": false, 00:04:37.855 "nvme_io": false, 00:04:37.855 "nvme_io_md": false, 00:04:37.855 "write_zeroes": true, 00:04:37.855 "zcopy": true, 00:04:37.855 "get_zone_info": false, 00:04:37.855 "zone_management": false, 00:04:37.855 "zone_append": false, 00:04:37.855 "compare": false, 00:04:37.855 "compare_and_write": false, 00:04:37.855 "abort": true, 00:04:37.855 "seek_hole": false, 00:04:37.855 "seek_data": false, 00:04:37.855 "copy": true, 00:04:37.855 "nvme_iov_md": false 00:04:37.855 }, 00:04:37.855 "memory_domains": [ 00:04:37.855 { 00:04:37.855 "dma_device_id": "system", 00:04:37.855 "dma_device_type": 1 00:04:37.855 }, 00:04:37.855 { 00:04:37.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.855 "dma_device_type": 2 00:04:37.855 } 00:04:37.855 ], 00:04:37.855 "driver_specific": {} 00:04:37.855 } 00:04:37.855 ]' 00:04:37.855 11:20:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:38.114 11:20:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:38.114 11:20:12 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:38.114 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.114 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.114 [2024-07-15 11:20:12.344265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:38.114 [2024-07-15 11:20:12.344304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:38.114 [2024-07-15 11:20:12.344320] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1035c80 00:04:38.114 [2024-07-15 11:20:12.344330] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:38.114 
[2024-07-15 11:20:12.345847] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:38.114 [2024-07-15 11:20:12.345874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:38.114 Passthru0 00:04:38.114 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.114 11:20:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:38.114 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.114 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.114 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.114 11:20:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:38.114 { 00:04:38.114 "name": "Malloc0", 00:04:38.114 "aliases": [ 00:04:38.114 "bc9fbed5-c0bc-437c-a47d-f8307a4e0fe6" 00:04:38.114 ], 00:04:38.114 "product_name": "Malloc disk", 00:04:38.114 "block_size": 512, 00:04:38.114 "num_blocks": 16384, 00:04:38.114 "uuid": "bc9fbed5-c0bc-437c-a47d-f8307a4e0fe6", 00:04:38.114 "assigned_rate_limits": { 00:04:38.114 "rw_ios_per_sec": 0, 00:04:38.114 "rw_mbytes_per_sec": 0, 00:04:38.114 "r_mbytes_per_sec": 0, 00:04:38.114 "w_mbytes_per_sec": 0 00:04:38.114 }, 00:04:38.114 "claimed": true, 00:04:38.114 "claim_type": "exclusive_write", 00:04:38.114 "zoned": false, 00:04:38.114 "supported_io_types": { 00:04:38.114 "read": true, 00:04:38.114 "write": true, 00:04:38.114 "unmap": true, 00:04:38.114 "flush": true, 00:04:38.114 "reset": true, 00:04:38.114 "nvme_admin": false, 00:04:38.114 "nvme_io": false, 00:04:38.114 "nvme_io_md": false, 00:04:38.114 "write_zeroes": true, 00:04:38.114 "zcopy": true, 00:04:38.114 "get_zone_info": false, 00:04:38.114 "zone_management": false, 00:04:38.114 "zone_append": false, 00:04:38.114 "compare": false, 00:04:38.114 "compare_and_write": false, 00:04:38.114 "abort": true, 00:04:38.114 "seek_hole": false, 00:04:38.114 "seek_data": false, 00:04:38.114 "copy": true, 00:04:38.114 "nvme_iov_md": false 00:04:38.114 }, 00:04:38.114 "memory_domains": [ 00:04:38.114 { 00:04:38.114 "dma_device_id": "system", 00:04:38.114 "dma_device_type": 1 00:04:38.114 }, 00:04:38.114 { 00:04:38.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.114 "dma_device_type": 2 00:04:38.114 } 00:04:38.114 ], 00:04:38.114 "driver_specific": {} 00:04:38.114 }, 00:04:38.114 { 00:04:38.114 "name": "Passthru0", 00:04:38.114 "aliases": [ 00:04:38.114 "7719dea8-8de5-520c-b681-c845b7f9cbf2" 00:04:38.114 ], 00:04:38.114 "product_name": "passthru", 00:04:38.114 "block_size": 512, 00:04:38.114 "num_blocks": 16384, 00:04:38.114 "uuid": "7719dea8-8de5-520c-b681-c845b7f9cbf2", 00:04:38.114 "assigned_rate_limits": { 00:04:38.114 "rw_ios_per_sec": 0, 00:04:38.114 "rw_mbytes_per_sec": 0, 00:04:38.114 "r_mbytes_per_sec": 0, 00:04:38.114 "w_mbytes_per_sec": 0 00:04:38.114 }, 00:04:38.114 "claimed": false, 00:04:38.114 "zoned": false, 00:04:38.114 "supported_io_types": { 00:04:38.114 "read": true, 00:04:38.114 "write": true, 00:04:38.114 "unmap": true, 00:04:38.114 "flush": true, 00:04:38.114 "reset": true, 00:04:38.114 "nvme_admin": false, 00:04:38.114 "nvme_io": false, 00:04:38.114 "nvme_io_md": false, 00:04:38.114 "write_zeroes": true, 00:04:38.114 "zcopy": true, 00:04:38.114 "get_zone_info": false, 00:04:38.114 "zone_management": false, 00:04:38.114 "zone_append": false, 00:04:38.114 "compare": false, 00:04:38.114 "compare_and_write": false, 00:04:38.114 "abort": true, 00:04:38.114 "seek_hole": false, 
00:04:38.114 "seek_data": false, 00:04:38.114 "copy": true, 00:04:38.114 "nvme_iov_md": false 00:04:38.114 }, 00:04:38.114 "memory_domains": [ 00:04:38.114 { 00:04:38.114 "dma_device_id": "system", 00:04:38.114 "dma_device_type": 1 00:04:38.114 }, 00:04:38.114 { 00:04:38.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.114 "dma_device_type": 2 00:04:38.114 } 00:04:38.114 ], 00:04:38.114 "driver_specific": { 00:04:38.114 "passthru": { 00:04:38.114 "name": "Passthru0", 00:04:38.114 "base_bdev_name": "Malloc0" 00:04:38.114 } 00:04:38.114 } 00:04:38.114 } 00:04:38.114 ]' 00:04:38.114 11:20:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:38.114 11:20:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:38.114 11:20:12 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:38.114 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.114 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.114 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.114 11:20:12 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:38.114 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.114 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.114 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.114 11:20:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:38.114 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.114 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.114 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.114 11:20:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:38.114 11:20:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:38.114 11:20:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:38.114 00:04:38.114 real 0m0.291s 00:04:38.114 user 0m0.187s 00:04:38.114 sys 0m0.041s 00:04:38.114 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.114 11:20:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.114 ************************************ 00:04:38.114 END TEST rpc_integrity 00:04:38.114 ************************************ 00:04:38.114 11:20:12 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:38.114 11:20:12 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:38.114 11:20:12 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.114 11:20:12 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.114 11:20:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.114 ************************************ 00:04:38.114 START TEST rpc_plugins 00:04:38.114 ************************************ 00:04:38.114 11:20:12 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:38.114 11:20:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:38.114 11:20:12 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.114 11:20:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.114 11:20:12 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.114 11:20:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:38.114 11:20:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:04:38.114 11:20:12 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.114 11:20:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.373 11:20:12 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.373 11:20:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:38.373 { 00:04:38.373 "name": "Malloc1", 00:04:38.373 "aliases": [ 00:04:38.373 "56fea4a1-c1a4-4814-8863-797d7850ccf4" 00:04:38.373 ], 00:04:38.373 "product_name": "Malloc disk", 00:04:38.373 "block_size": 4096, 00:04:38.373 "num_blocks": 256, 00:04:38.373 "uuid": "56fea4a1-c1a4-4814-8863-797d7850ccf4", 00:04:38.373 "assigned_rate_limits": { 00:04:38.373 "rw_ios_per_sec": 0, 00:04:38.373 "rw_mbytes_per_sec": 0, 00:04:38.373 "r_mbytes_per_sec": 0, 00:04:38.373 "w_mbytes_per_sec": 0 00:04:38.373 }, 00:04:38.373 "claimed": false, 00:04:38.373 "zoned": false, 00:04:38.373 "supported_io_types": { 00:04:38.373 "read": true, 00:04:38.373 "write": true, 00:04:38.373 "unmap": true, 00:04:38.373 "flush": true, 00:04:38.373 "reset": true, 00:04:38.373 "nvme_admin": false, 00:04:38.373 "nvme_io": false, 00:04:38.373 "nvme_io_md": false, 00:04:38.373 "write_zeroes": true, 00:04:38.373 "zcopy": true, 00:04:38.373 "get_zone_info": false, 00:04:38.373 "zone_management": false, 00:04:38.373 "zone_append": false, 00:04:38.373 "compare": false, 00:04:38.373 "compare_and_write": false, 00:04:38.373 "abort": true, 00:04:38.373 "seek_hole": false, 00:04:38.373 "seek_data": false, 00:04:38.373 "copy": true, 00:04:38.373 "nvme_iov_md": false 00:04:38.373 }, 00:04:38.373 "memory_domains": [ 00:04:38.373 { 00:04:38.373 "dma_device_id": "system", 00:04:38.373 "dma_device_type": 1 00:04:38.373 }, 00:04:38.373 { 00:04:38.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.373 "dma_device_type": 2 00:04:38.373 } 00:04:38.373 ], 00:04:38.373 "driver_specific": {} 00:04:38.373 } 00:04:38.373 ]' 00:04:38.373 11:20:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:38.373 11:20:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:38.373 11:20:12 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:38.373 11:20:12 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.373 11:20:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.373 11:20:12 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.373 11:20:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:38.373 11:20:12 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.373 11:20:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.373 11:20:12 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.373 11:20:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:38.373 11:20:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:38.373 11:20:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:38.373 00:04:38.373 real 0m0.137s 00:04:38.373 user 0m0.084s 00:04:38.373 sys 0m0.024s 00:04:38.373 11:20:12 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.373 11:20:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.373 ************************************ 00:04:38.373 END TEST rpc_plugins 00:04:38.373 ************************************ 00:04:38.373 11:20:12 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:38.373 11:20:12 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:38.373 11:20:12 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.373 11:20:12 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.373 11:20:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.373 ************************************ 00:04:38.373 START TEST rpc_trace_cmd_test 00:04:38.373 ************************************ 00:04:38.373 11:20:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:38.373 11:20:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:38.373 11:20:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:38.373 11:20:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.373 11:20:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:38.373 11:20:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.373 11:20:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:38.373 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2585703", 00:04:38.373 "tpoint_group_mask": "0x8", 00:04:38.373 "iscsi_conn": { 00:04:38.373 "mask": "0x2", 00:04:38.373 "tpoint_mask": "0x0" 00:04:38.373 }, 00:04:38.373 "scsi": { 00:04:38.373 "mask": "0x4", 00:04:38.373 "tpoint_mask": "0x0" 00:04:38.373 }, 00:04:38.373 "bdev": { 00:04:38.373 "mask": "0x8", 00:04:38.373 "tpoint_mask": "0xffffffffffffffff" 00:04:38.373 }, 00:04:38.373 "nvmf_rdma": { 00:04:38.373 "mask": "0x10", 00:04:38.373 "tpoint_mask": "0x0" 00:04:38.373 }, 00:04:38.373 "nvmf_tcp": { 00:04:38.373 "mask": "0x20", 00:04:38.373 "tpoint_mask": "0x0" 00:04:38.373 }, 00:04:38.373 "ftl": { 00:04:38.373 "mask": "0x40", 00:04:38.373 "tpoint_mask": "0x0" 00:04:38.373 }, 00:04:38.373 "blobfs": { 00:04:38.373 "mask": "0x80", 00:04:38.373 "tpoint_mask": "0x0" 00:04:38.373 }, 00:04:38.373 "dsa": { 00:04:38.374 "mask": "0x200", 00:04:38.374 "tpoint_mask": "0x0" 00:04:38.374 }, 00:04:38.374 "thread": { 00:04:38.374 "mask": "0x400", 00:04:38.374 "tpoint_mask": "0x0" 00:04:38.374 }, 00:04:38.374 "nvme_pcie": { 00:04:38.374 "mask": "0x800", 00:04:38.374 "tpoint_mask": "0x0" 00:04:38.374 }, 00:04:38.374 "iaa": { 00:04:38.374 "mask": "0x1000", 00:04:38.374 "tpoint_mask": "0x0" 00:04:38.374 }, 00:04:38.374 "nvme_tcp": { 00:04:38.374 "mask": "0x2000", 00:04:38.374 "tpoint_mask": "0x0" 00:04:38.374 }, 00:04:38.374 "bdev_nvme": { 00:04:38.374 "mask": "0x4000", 00:04:38.374 "tpoint_mask": "0x0" 00:04:38.374 }, 00:04:38.374 "sock": { 00:04:38.374 "mask": "0x8000", 00:04:38.374 "tpoint_mask": "0x0" 00:04:38.374 } 00:04:38.374 }' 00:04:38.374 11:20:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:38.374 11:20:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:38.374 11:20:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:38.631 11:20:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:38.631 11:20:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:38.631 11:20:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:38.631 11:20:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:38.631 11:20:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:38.631 11:20:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:38.631 11:20:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
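The trace_get_info checks just above confirm the effect of starting the target with -e bdev: the group mask is 0x8 (bdev only), that group's per-tpoint mask is fully set, and the shm path points at the per-pid trace file named in the startup notice. A short sketch of inspecting and capturing those tracepoints; the jq filters mirror the ones used above, while the spdk_trace binary path and the pgrep lookup are assumptions added for illustration.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

$SPDK/scripts/rpc.py trace_get_info | jq -r '.tpoint_group_mask'   # "0x8" when started with -e bdev
$SPDK/scripts/rpc.py trace_get_info | jq -r '.bdev.tpoint_mask'    # "0xffffffffffffffff"

# Snapshot the events recorded in /dev/shm, as suggested by the app_setup_trace
# notice earlier in this log
$SPDK/build/bin/spdk_trace -s spdk_tgt -p "$(pgrep -f spdk_tgt | head -n1)"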
00:04:38.631 00:04:38.631 real 0m0.227s 00:04:38.631 user 0m0.191s 00:04:38.631 sys 0m0.029s 00:04:38.631 11:20:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.631 11:20:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:38.631 ************************************ 00:04:38.631 END TEST rpc_trace_cmd_test 00:04:38.631 ************************************ 00:04:38.631 11:20:13 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:38.631 11:20:13 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:38.631 11:20:13 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:38.631 11:20:13 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:38.631 11:20:13 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.631 11:20:13 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.631 11:20:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.631 ************************************ 00:04:38.631 START TEST rpc_daemon_integrity 00:04:38.631 ************************************ 00:04:38.631 11:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:38.631 11:20:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:38.631 11:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.631 11:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.631 11:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.631 11:20:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:38.631 11:20:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:38.889 { 00:04:38.889 "name": "Malloc2", 00:04:38.889 "aliases": [ 00:04:38.889 "1d5cd5dc-3b48-4f5d-a344-5b49809c37b3" 00:04:38.889 ], 00:04:38.889 "product_name": "Malloc disk", 00:04:38.889 "block_size": 512, 00:04:38.889 "num_blocks": 16384, 00:04:38.889 "uuid": "1d5cd5dc-3b48-4f5d-a344-5b49809c37b3", 00:04:38.889 "assigned_rate_limits": { 00:04:38.889 "rw_ios_per_sec": 0, 00:04:38.889 "rw_mbytes_per_sec": 0, 00:04:38.889 "r_mbytes_per_sec": 0, 00:04:38.889 "w_mbytes_per_sec": 0 00:04:38.889 }, 00:04:38.889 "claimed": false, 00:04:38.889 "zoned": false, 00:04:38.889 "supported_io_types": { 00:04:38.889 "read": true, 00:04:38.889 "write": true, 00:04:38.889 "unmap": true, 00:04:38.889 "flush": true, 00:04:38.889 "reset": true, 00:04:38.889 "nvme_admin": false, 00:04:38.889 "nvme_io": false, 
00:04:38.889 "nvme_io_md": false, 00:04:38.889 "write_zeroes": true, 00:04:38.889 "zcopy": true, 00:04:38.889 "get_zone_info": false, 00:04:38.889 "zone_management": false, 00:04:38.889 "zone_append": false, 00:04:38.889 "compare": false, 00:04:38.889 "compare_and_write": false, 00:04:38.889 "abort": true, 00:04:38.889 "seek_hole": false, 00:04:38.889 "seek_data": false, 00:04:38.889 "copy": true, 00:04:38.889 "nvme_iov_md": false 00:04:38.889 }, 00:04:38.889 "memory_domains": [ 00:04:38.889 { 00:04:38.889 "dma_device_id": "system", 00:04:38.889 "dma_device_type": 1 00:04:38.889 }, 00:04:38.889 { 00:04:38.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.889 "dma_device_type": 2 00:04:38.889 } 00:04:38.889 ], 00:04:38.889 "driver_specific": {} 00:04:38.889 } 00:04:38.889 ]' 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.889 [2024-07-15 11:20:13.198699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:38.889 [2024-07-15 11:20:13.198735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:38.889 [2024-07-15 11:20:13.198754] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10371c0 00:04:38.889 [2024-07-15 11:20:13.198763] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:38.889 [2024-07-15 11:20:13.200144] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:38.889 [2024-07-15 11:20:13.200170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:38.889 Passthru0 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:38.889 { 00:04:38.889 "name": "Malloc2", 00:04:38.889 "aliases": [ 00:04:38.889 "1d5cd5dc-3b48-4f5d-a344-5b49809c37b3" 00:04:38.889 ], 00:04:38.889 "product_name": "Malloc disk", 00:04:38.889 "block_size": 512, 00:04:38.889 "num_blocks": 16384, 00:04:38.889 "uuid": "1d5cd5dc-3b48-4f5d-a344-5b49809c37b3", 00:04:38.889 "assigned_rate_limits": { 00:04:38.889 "rw_ios_per_sec": 0, 00:04:38.889 "rw_mbytes_per_sec": 0, 00:04:38.889 "r_mbytes_per_sec": 0, 00:04:38.889 "w_mbytes_per_sec": 0 00:04:38.889 }, 00:04:38.889 "claimed": true, 00:04:38.889 "claim_type": "exclusive_write", 00:04:38.889 "zoned": false, 00:04:38.889 "supported_io_types": { 00:04:38.889 "read": true, 00:04:38.889 "write": true, 00:04:38.889 "unmap": true, 00:04:38.889 "flush": true, 00:04:38.889 "reset": true, 00:04:38.889 "nvme_admin": false, 00:04:38.889 "nvme_io": false, 00:04:38.889 "nvme_io_md": false, 00:04:38.889 "write_zeroes": true, 00:04:38.889 "zcopy": true, 00:04:38.889 "get_zone_info": 
false, 00:04:38.889 "zone_management": false, 00:04:38.889 "zone_append": false, 00:04:38.889 "compare": false, 00:04:38.889 "compare_and_write": false, 00:04:38.889 "abort": true, 00:04:38.889 "seek_hole": false, 00:04:38.889 "seek_data": false, 00:04:38.889 "copy": true, 00:04:38.889 "nvme_iov_md": false 00:04:38.889 }, 00:04:38.889 "memory_domains": [ 00:04:38.889 { 00:04:38.889 "dma_device_id": "system", 00:04:38.889 "dma_device_type": 1 00:04:38.889 }, 00:04:38.889 { 00:04:38.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.889 "dma_device_type": 2 00:04:38.889 } 00:04:38.889 ], 00:04:38.889 "driver_specific": {} 00:04:38.889 }, 00:04:38.889 { 00:04:38.889 "name": "Passthru0", 00:04:38.889 "aliases": [ 00:04:38.889 "111817ae-1be9-5c10-9ccd-01b2250cb418" 00:04:38.889 ], 00:04:38.889 "product_name": "passthru", 00:04:38.889 "block_size": 512, 00:04:38.889 "num_blocks": 16384, 00:04:38.889 "uuid": "111817ae-1be9-5c10-9ccd-01b2250cb418", 00:04:38.889 "assigned_rate_limits": { 00:04:38.889 "rw_ios_per_sec": 0, 00:04:38.889 "rw_mbytes_per_sec": 0, 00:04:38.889 "r_mbytes_per_sec": 0, 00:04:38.889 "w_mbytes_per_sec": 0 00:04:38.889 }, 00:04:38.889 "claimed": false, 00:04:38.889 "zoned": false, 00:04:38.889 "supported_io_types": { 00:04:38.889 "read": true, 00:04:38.889 "write": true, 00:04:38.889 "unmap": true, 00:04:38.889 "flush": true, 00:04:38.889 "reset": true, 00:04:38.889 "nvme_admin": false, 00:04:38.889 "nvme_io": false, 00:04:38.889 "nvme_io_md": false, 00:04:38.889 "write_zeroes": true, 00:04:38.889 "zcopy": true, 00:04:38.889 "get_zone_info": false, 00:04:38.889 "zone_management": false, 00:04:38.889 "zone_append": false, 00:04:38.889 "compare": false, 00:04:38.889 "compare_and_write": false, 00:04:38.889 "abort": true, 00:04:38.889 "seek_hole": false, 00:04:38.889 "seek_data": false, 00:04:38.889 "copy": true, 00:04:38.889 "nvme_iov_md": false 00:04:38.889 }, 00:04:38.889 "memory_domains": [ 00:04:38.889 { 00:04:38.889 "dma_device_id": "system", 00:04:38.889 "dma_device_type": 1 00:04:38.889 }, 00:04:38.889 { 00:04:38.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.889 "dma_device_type": 2 00:04:38.889 } 00:04:38.889 ], 00:04:38.889 "driver_specific": { 00:04:38.889 "passthru": { 00:04:38.889 "name": "Passthru0", 00:04:38.889 "base_bdev_name": "Malloc2" 00:04:38.889 } 00:04:38.889 } 00:04:38.889 } 00:04:38.889 ]' 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.889 11:20:13 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:38.889 00:04:38.889 real 0m0.276s 00:04:38.889 user 0m0.193s 00:04:38.889 sys 0m0.030s 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.889 11:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.889 ************************************ 00:04:38.889 END TEST rpc_daemon_integrity 00:04:38.889 ************************************ 00:04:39.146 11:20:13 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:39.146 11:20:13 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:39.146 11:20:13 rpc -- rpc/rpc.sh@84 -- # killprocess 2585703 00:04:39.146 11:20:13 rpc -- common/autotest_common.sh@948 -- # '[' -z 2585703 ']' 00:04:39.146 11:20:13 rpc -- common/autotest_common.sh@952 -- # kill -0 2585703 00:04:39.146 11:20:13 rpc -- common/autotest_common.sh@953 -- # uname 00:04:39.146 11:20:13 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:39.146 11:20:13 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2585703 00:04:39.146 11:20:13 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:39.146 11:20:13 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:39.146 11:20:13 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2585703' 00:04:39.146 killing process with pid 2585703 00:04:39.146 11:20:13 rpc -- common/autotest_common.sh@967 -- # kill 2585703 00:04:39.146 11:20:13 rpc -- common/autotest_common.sh@972 -- # wait 2585703 00:04:39.404 00:04:39.404 real 0m2.114s 00:04:39.404 user 0m2.808s 00:04:39.404 sys 0m0.691s 00:04:39.404 11:20:13 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.404 11:20:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.404 ************************************ 00:04:39.404 END TEST rpc 00:04:39.404 ************************************ 00:04:39.404 11:20:13 -- common/autotest_common.sh@1142 -- # return 0 00:04:39.404 11:20:13 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:39.404 11:20:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.404 11:20:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.404 11:20:13 -- common/autotest_common.sh@10 -- # set +x 00:04:39.404 ************************************ 00:04:39.404 START TEST skip_rpc 00:04:39.404 ************************************ 00:04:39.404 11:20:13 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:39.662 * Looking for test storage... 
00:04:39.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:39.662 11:20:13 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:39.662 11:20:13 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:39.662 11:20:13 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:39.662 11:20:13 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.662 11:20:13 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.662 11:20:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.662 ************************************ 00:04:39.662 START TEST skip_rpc 00:04:39.662 ************************************ 00:04:39.662 11:20:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:39.662 11:20:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2586268 00:04:39.662 11:20:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.662 11:20:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:39.662 11:20:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:39.662 [2024-07-15 11:20:14.009404] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:04:39.662 [2024-07-15 11:20:14.009462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2586268 ] 00:04:39.662 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.662 [2024-07-15 11:20:14.089598] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.920 [2024-07-15 11:20:14.179134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.194 11:20:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:45.194 11:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:45.194 11:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:45.194 11:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:45.194 11:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:45.194 11:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:45.194 11:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:45.194 11:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:45.194 11:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.194 11:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.194 11:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:45.194 11:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:45.194 11:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:45.194 11:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:45.194 11:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:45.194 11:20:18 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:45.194 11:20:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2586268 00:04:45.194 11:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2586268 ']' 00:04:45.194 11:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2586268 00:04:45.194 11:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:45.194 11:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:45.194 11:20:18 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2586268 00:04:45.194 11:20:19 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:45.194 11:20:19 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:45.194 11:20:19 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2586268' 00:04:45.194 killing process with pid 2586268 00:04:45.194 11:20:19 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2586268 00:04:45.194 11:20:19 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2586268 00:04:45.194 00:04:45.194 real 0m5.398s 00:04:45.194 user 0m5.137s 00:04:45.194 sys 0m0.293s 00:04:45.194 11:20:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.194 11:20:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.194 ************************************ 00:04:45.194 END TEST skip_rpc 00:04:45.194 ************************************ 00:04:45.194 11:20:19 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:45.194 11:20:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:45.194 11:20:19 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.194 11:20:19 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.194 11:20:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.194 ************************************ 00:04:45.194 START TEST skip_rpc_with_json 00:04:45.194 ************************************ 00:04:45.194 11:20:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:45.194 11:20:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:45.194 11:20:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2587226 00:04:45.194 11:20:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.194 11:20:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:45.194 11:20:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2587226 00:04:45.194 11:20:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2587226 ']' 00:04:45.194 11:20:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.194 11:20:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:45.194 11:20:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
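The skip_rpc test that just completed is the inverse check: spdk_tgt is started with --no-rpc-server, so the rpc_cmd spdk_get_version attempt above fails (es=1) exactly as test_skip_rpc expects, and the test only verifies that the target can still be shut down cleanly afterwards. A minimal sketch of that behaviour, run as root from the SPDK tree; the fixed sleep stands in for the test's own 5-second delay.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

$SPDK/build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # no UNIX-domain RPC listener is created
tgt=$!
sleep 5
$SPDK/scripts/rpc.py spdk_get_version || echo "RPC refused, as the test expects"
kill $tgt; wait $tgt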
00:04:45.194 11:20:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:45.194 11:20:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.194 [2024-07-15 11:20:19.476070] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:04:45.194 [2024-07-15 11:20:19.476121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2587226 ] 00:04:45.194 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.195 [2024-07-15 11:20:19.558132] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.195 [2024-07-15 11:20:19.649062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.132 11:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.132 11:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:46.132 11:20:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:46.132 11:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.132 11:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.132 [2024-07-15 11:20:20.413190] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:46.132 request: 00:04:46.132 { 00:04:46.132 "trtype": "tcp", 00:04:46.132 "method": "nvmf_get_transports", 00:04:46.132 "req_id": 1 00:04:46.132 } 00:04:46.132 Got JSON-RPC error response 00:04:46.132 response: 00:04:46.132 { 00:04:46.132 "code": -19, 00:04:46.132 "message": "No such device" 00:04:46.132 } 00:04:46.132 11:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:46.132 11:20:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:46.132 11:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.132 11:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.132 [2024-07-15 11:20:20.425334] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:46.132 11:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.132 11:20:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:46.132 11:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.132 11:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.132 11:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.132 11:20:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:46.132 { 00:04:46.132 "subsystems": [ 00:04:46.132 { 00:04:46.132 "subsystem": "vfio_user_target", 00:04:46.132 "config": null 00:04:46.132 }, 00:04:46.132 { 00:04:46.132 "subsystem": "keyring", 00:04:46.132 "config": [] 00:04:46.132 }, 00:04:46.132 { 00:04:46.132 "subsystem": "iobuf", 00:04:46.132 "config": [ 00:04:46.132 { 00:04:46.132 "method": "iobuf_set_options", 00:04:46.132 "params": { 00:04:46.132 "small_pool_count": 8192, 00:04:46.132 "large_pool_count": 1024, 00:04:46.132 "small_bufsize": 8192, 00:04:46.132 "large_bufsize": 
135168 00:04:46.132 } 00:04:46.132 } 00:04:46.132 ] 00:04:46.132 }, 00:04:46.132 { 00:04:46.132 "subsystem": "sock", 00:04:46.132 "config": [ 00:04:46.132 { 00:04:46.132 "method": "sock_set_default_impl", 00:04:46.132 "params": { 00:04:46.132 "impl_name": "posix" 00:04:46.132 } 00:04:46.132 }, 00:04:46.132 { 00:04:46.132 "method": "sock_impl_set_options", 00:04:46.132 "params": { 00:04:46.132 "impl_name": "ssl", 00:04:46.132 "recv_buf_size": 4096, 00:04:46.132 "send_buf_size": 4096, 00:04:46.132 "enable_recv_pipe": true, 00:04:46.132 "enable_quickack": false, 00:04:46.132 "enable_placement_id": 0, 00:04:46.132 "enable_zerocopy_send_server": true, 00:04:46.132 "enable_zerocopy_send_client": false, 00:04:46.132 "zerocopy_threshold": 0, 00:04:46.132 "tls_version": 0, 00:04:46.132 "enable_ktls": false 00:04:46.132 } 00:04:46.132 }, 00:04:46.132 { 00:04:46.132 "method": "sock_impl_set_options", 00:04:46.132 "params": { 00:04:46.132 "impl_name": "posix", 00:04:46.132 "recv_buf_size": 2097152, 00:04:46.132 "send_buf_size": 2097152, 00:04:46.132 "enable_recv_pipe": true, 00:04:46.132 "enable_quickack": false, 00:04:46.132 "enable_placement_id": 0, 00:04:46.132 "enable_zerocopy_send_server": true, 00:04:46.132 "enable_zerocopy_send_client": false, 00:04:46.132 "zerocopy_threshold": 0, 00:04:46.132 "tls_version": 0, 00:04:46.132 "enable_ktls": false 00:04:46.132 } 00:04:46.132 } 00:04:46.132 ] 00:04:46.132 }, 00:04:46.132 { 00:04:46.132 "subsystem": "vmd", 00:04:46.132 "config": [] 00:04:46.132 }, 00:04:46.132 { 00:04:46.132 "subsystem": "accel", 00:04:46.132 "config": [ 00:04:46.132 { 00:04:46.132 "method": "accel_set_options", 00:04:46.132 "params": { 00:04:46.132 "small_cache_size": 128, 00:04:46.132 "large_cache_size": 16, 00:04:46.132 "task_count": 2048, 00:04:46.132 "sequence_count": 2048, 00:04:46.132 "buf_count": 2048 00:04:46.132 } 00:04:46.132 } 00:04:46.132 ] 00:04:46.132 }, 00:04:46.132 { 00:04:46.132 "subsystem": "bdev", 00:04:46.132 "config": [ 00:04:46.132 { 00:04:46.132 "method": "bdev_set_options", 00:04:46.132 "params": { 00:04:46.132 "bdev_io_pool_size": 65535, 00:04:46.132 "bdev_io_cache_size": 256, 00:04:46.132 "bdev_auto_examine": true, 00:04:46.132 "iobuf_small_cache_size": 128, 00:04:46.132 "iobuf_large_cache_size": 16 00:04:46.132 } 00:04:46.132 }, 00:04:46.132 { 00:04:46.132 "method": "bdev_raid_set_options", 00:04:46.132 "params": { 00:04:46.132 "process_window_size_kb": 1024 00:04:46.132 } 00:04:46.132 }, 00:04:46.132 { 00:04:46.132 "method": "bdev_iscsi_set_options", 00:04:46.132 "params": { 00:04:46.132 "timeout_sec": 30 00:04:46.132 } 00:04:46.132 }, 00:04:46.132 { 00:04:46.132 "method": "bdev_nvme_set_options", 00:04:46.132 "params": { 00:04:46.132 "action_on_timeout": "none", 00:04:46.132 "timeout_us": 0, 00:04:46.132 "timeout_admin_us": 0, 00:04:46.132 "keep_alive_timeout_ms": 10000, 00:04:46.132 "arbitration_burst": 0, 00:04:46.132 "low_priority_weight": 0, 00:04:46.132 "medium_priority_weight": 0, 00:04:46.132 "high_priority_weight": 0, 00:04:46.132 "nvme_adminq_poll_period_us": 10000, 00:04:46.132 "nvme_ioq_poll_period_us": 0, 00:04:46.132 "io_queue_requests": 0, 00:04:46.132 "delay_cmd_submit": true, 00:04:46.132 "transport_retry_count": 4, 00:04:46.132 "bdev_retry_count": 3, 00:04:46.132 "transport_ack_timeout": 0, 00:04:46.132 "ctrlr_loss_timeout_sec": 0, 00:04:46.132 "reconnect_delay_sec": 0, 00:04:46.132 "fast_io_fail_timeout_sec": 0, 00:04:46.132 "disable_auto_failback": false, 00:04:46.132 "generate_uuids": false, 00:04:46.132 "transport_tos": 0, 
00:04:46.132 "nvme_error_stat": false, 00:04:46.132 "rdma_srq_size": 0, 00:04:46.132 "io_path_stat": false, 00:04:46.132 "allow_accel_sequence": false, 00:04:46.132 "rdma_max_cq_size": 0, 00:04:46.132 "rdma_cm_event_timeout_ms": 0, 00:04:46.132 "dhchap_digests": [ 00:04:46.132 "sha256", 00:04:46.132 "sha384", 00:04:46.132 "sha512" 00:04:46.132 ], 00:04:46.132 "dhchap_dhgroups": [ 00:04:46.132 "null", 00:04:46.132 "ffdhe2048", 00:04:46.132 "ffdhe3072", 00:04:46.132 "ffdhe4096", 00:04:46.132 "ffdhe6144", 00:04:46.132 "ffdhe8192" 00:04:46.132 ] 00:04:46.132 } 00:04:46.132 }, 00:04:46.132 { 00:04:46.132 "method": "bdev_nvme_set_hotplug", 00:04:46.132 "params": { 00:04:46.132 "period_us": 100000, 00:04:46.132 "enable": false 00:04:46.132 } 00:04:46.132 }, 00:04:46.132 { 00:04:46.132 "method": "bdev_wait_for_examine" 00:04:46.132 } 00:04:46.132 ] 00:04:46.132 }, 00:04:46.132 { 00:04:46.132 "subsystem": "scsi", 00:04:46.132 "config": null 00:04:46.132 }, 00:04:46.132 { 00:04:46.132 "subsystem": "scheduler", 00:04:46.132 "config": [ 00:04:46.132 { 00:04:46.132 "method": "framework_set_scheduler", 00:04:46.132 "params": { 00:04:46.132 "name": "static" 00:04:46.132 } 00:04:46.132 } 00:04:46.132 ] 00:04:46.132 }, 00:04:46.132 { 00:04:46.132 "subsystem": "vhost_scsi", 00:04:46.132 "config": [] 00:04:46.132 }, 00:04:46.132 { 00:04:46.132 "subsystem": "vhost_blk", 00:04:46.132 "config": [] 00:04:46.132 }, 00:04:46.132 { 00:04:46.132 "subsystem": "ublk", 00:04:46.132 "config": [] 00:04:46.132 }, 00:04:46.132 { 00:04:46.132 "subsystem": "nbd", 00:04:46.132 "config": [] 00:04:46.132 }, 00:04:46.132 { 00:04:46.132 "subsystem": "nvmf", 00:04:46.132 "config": [ 00:04:46.132 { 00:04:46.132 "method": "nvmf_set_config", 00:04:46.132 "params": { 00:04:46.132 "discovery_filter": "match_any", 00:04:46.132 "admin_cmd_passthru": { 00:04:46.132 "identify_ctrlr": false 00:04:46.132 } 00:04:46.132 } 00:04:46.132 }, 00:04:46.132 { 00:04:46.132 "method": "nvmf_set_max_subsystems", 00:04:46.132 "params": { 00:04:46.132 "max_subsystems": 1024 00:04:46.132 } 00:04:46.132 }, 00:04:46.132 { 00:04:46.132 "method": "nvmf_set_crdt", 00:04:46.132 "params": { 00:04:46.132 "crdt1": 0, 00:04:46.133 "crdt2": 0, 00:04:46.133 "crdt3": 0 00:04:46.133 } 00:04:46.133 }, 00:04:46.133 { 00:04:46.133 "method": "nvmf_create_transport", 00:04:46.133 "params": { 00:04:46.133 "trtype": "TCP", 00:04:46.133 "max_queue_depth": 128, 00:04:46.133 "max_io_qpairs_per_ctrlr": 127, 00:04:46.133 "in_capsule_data_size": 4096, 00:04:46.133 "max_io_size": 131072, 00:04:46.133 "io_unit_size": 131072, 00:04:46.133 "max_aq_depth": 128, 00:04:46.133 "num_shared_buffers": 511, 00:04:46.133 "buf_cache_size": 4294967295, 00:04:46.133 "dif_insert_or_strip": false, 00:04:46.133 "zcopy": false, 00:04:46.133 "c2h_success": true, 00:04:46.133 "sock_priority": 0, 00:04:46.133 "abort_timeout_sec": 1, 00:04:46.133 "ack_timeout": 0, 00:04:46.133 "data_wr_pool_size": 0 00:04:46.133 } 00:04:46.133 } 00:04:46.133 ] 00:04:46.133 }, 00:04:46.133 { 00:04:46.133 "subsystem": "iscsi", 00:04:46.133 "config": [ 00:04:46.133 { 00:04:46.133 "method": "iscsi_set_options", 00:04:46.133 "params": { 00:04:46.133 "node_base": "iqn.2016-06.io.spdk", 00:04:46.133 "max_sessions": 128, 00:04:46.133 "max_connections_per_session": 2, 00:04:46.133 "max_queue_depth": 64, 00:04:46.133 "default_time2wait": 2, 00:04:46.133 "default_time2retain": 20, 00:04:46.133 "first_burst_length": 8192, 00:04:46.133 "immediate_data": true, 00:04:46.133 "allow_duplicated_isid": false, 00:04:46.133 
"error_recovery_level": 0, 00:04:46.133 "nop_timeout": 60, 00:04:46.133 "nop_in_interval": 30, 00:04:46.133 "disable_chap": false, 00:04:46.133 "require_chap": false, 00:04:46.133 "mutual_chap": false, 00:04:46.133 "chap_group": 0, 00:04:46.133 "max_large_datain_per_connection": 64, 00:04:46.133 "max_r2t_per_connection": 4, 00:04:46.133 "pdu_pool_size": 36864, 00:04:46.133 "immediate_data_pool_size": 16384, 00:04:46.133 "data_out_pool_size": 2048 00:04:46.133 } 00:04:46.133 } 00:04:46.133 ] 00:04:46.133 } 00:04:46.133 ] 00:04:46.133 } 00:04:46.133 11:20:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:46.133 11:20:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2587226 00:04:46.133 11:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2587226 ']' 00:04:46.133 11:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2587226 00:04:46.133 11:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:46.133 11:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:46.133 11:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2587226 00:04:46.392 11:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:46.392 11:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:46.392 11:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2587226' 00:04:46.392 killing process with pid 2587226 00:04:46.392 11:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2587226 00:04:46.392 11:20:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2587226 00:04:46.651 11:20:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2587505 00:04:46.651 11:20:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:46.651 11:20:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:51.923 11:20:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2587505 00:04:51.923 11:20:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2587505 ']' 00:04:51.923 11:20:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2587505 00:04:51.923 11:20:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:51.923 11:20:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:51.923 11:20:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2587505 00:04:51.923 11:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:51.923 11:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:51.923 11:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2587505' 00:04:51.923 killing process with pid 2587505 00:04:51.923 11:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2587505 00:04:51.923 11:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2587505 
00:04:51.923 11:20:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:51.923 11:20:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:51.923 00:04:51.923 real 0m6.944s 00:04:51.923 user 0m6.843s 00:04:51.923 sys 0m0.663s 00:04:51.923 11:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.923 11:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.923 ************************************ 00:04:51.923 END TEST skip_rpc_with_json 00:04:51.923 ************************************ 00:04:52.183 11:20:26 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:52.183 11:20:26 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:52.183 11:20:26 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.183 11:20:26 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.183 11:20:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.183 ************************************ 00:04:52.183 START TEST skip_rpc_with_delay 00:04:52.183 ************************************ 00:04:52.183 11:20:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:52.183 11:20:26 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:52.183 11:20:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:52.183 11:20:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:52.183 11:20:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.183 11:20:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.183 11:20:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.183 11:20:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.183 11:20:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.183 11:20:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.183 11:20:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.183 11:20:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:52.183 11:20:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:52.183 [2024-07-15 11:20:26.490809] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
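The error above is the expected outcome of the delay test: --wait-for-rpc tells the app to pause until RPC configuration arrives, which is meaningless when --no-rpc-server disables the RPC server. A one-line sketch of the invocation being wrapped (same binary and flags as in the trace); it exits non-zero, and the NOT helper only verifies that the failure is reported cleanly:

  $ build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  # *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.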
00:04:52.183 [2024-07-15 11:20:26.490888] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:52.183 11:20:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:52.183 11:20:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:52.183 11:20:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:52.183 11:20:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:52.183 00:04:52.183 real 0m0.076s 00:04:52.183 user 0m0.047s 00:04:52.183 sys 0m0.028s 00:04:52.183 11:20:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.183 11:20:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:52.183 ************************************ 00:04:52.183 END TEST skip_rpc_with_delay 00:04:52.183 ************************************ 00:04:52.183 11:20:26 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:52.183 11:20:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:52.183 11:20:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:52.183 11:20:26 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:52.183 11:20:26 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.183 11:20:26 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.183 11:20:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.183 ************************************ 00:04:52.183 START TEST exit_on_failed_rpc_init 00:04:52.183 ************************************ 00:04:52.183 11:20:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:52.183 11:20:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2588597 00:04:52.183 11:20:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2588597 00:04:52.183 11:20:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:52.183 11:20:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2588597 ']' 00:04:52.183 11:20:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.183 11:20:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:52.183 11:20:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.183 11:20:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:52.183 11:20:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:52.443 [2024-07-15 11:20:26.675578] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:04:52.443 [2024-07-15 11:20:26.675685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2588597 ] 00:04:52.443 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.443 [2024-07-15 11:20:26.797384] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.443 [2024-07-15 11:20:26.899446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.702 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.702 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:52.702 11:20:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.702 11:20:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.702 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:52.702 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.702 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.702 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.702 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.702 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.702 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.702 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.702 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.702 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:52.702 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.961 [2024-07-15 11:20:27.190069] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
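Both target instances in this test use the default RPC socket, /var/tmp/spdk.sock, so the second launch above is expected to fail; the conflict shows up in the trace that follows. A sketch of what is being provoked:

  $ build/bin/spdk_tgt -m 0x1 &     # first instance, owns /var/tmp/spdk.sock
  $ build/bin/spdk_tgt -m 0x2       # second instance: RPC Unix domain socket path /var/tmp/spdk.sock in use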
00:04:52.961 [2024-07-15 11:20:27.190129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2588845 ] 00:04:52.961 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.961 [2024-07-15 11:20:27.269827] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.961 [2024-07-15 11:20:27.370273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.961 [2024-07-15 11:20:27.370362] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:52.961 [2024-07-15 11:20:27.370378] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:52.961 [2024-07-15 11:20:27.370389] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:53.220 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:53.220 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:53.220 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:53.220 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:53.220 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:53.220 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:53.220 11:20:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:53.220 11:20:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2588597 00:04:53.220 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2588597 ']' 00:04:53.220 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2588597 00:04:53.221 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:53.221 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:53.221 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2588597 00:04:53.221 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:53.221 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:53.221 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2588597' 00:04:53.221 killing process with pid 2588597 00:04:53.221 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2588597 00:04:53.221 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2588597 00:04:53.480 00:04:53.480 real 0m1.275s 00:04:53.480 user 0m1.638s 00:04:53.480 sys 0m0.504s 00:04:53.480 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.480 11:20:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:53.480 ************************************ 00:04:53.480 END TEST exit_on_failed_rpc_init 00:04:53.480 ************************************ 00:04:53.480 11:20:27 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:53.480 11:20:27 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:53.480 00:04:53.480 real 0m14.069s 00:04:53.480 user 0m13.822s 00:04:53.480 sys 0m1.735s 00:04:53.480 11:20:27 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.480 11:20:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.480 ************************************ 00:04:53.480 END TEST skip_rpc 00:04:53.480 ************************************ 00:04:53.480 11:20:27 -- common/autotest_common.sh@1142 -- # return 0 00:04:53.480 11:20:27 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:53.480 11:20:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.480 11:20:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.480 11:20:27 -- common/autotest_common.sh@10 -- # set +x 00:04:53.740 ************************************ 00:04:53.740 START TEST rpc_client 00:04:53.740 ************************************ 00:04:53.740 11:20:27 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:53.740 * Looking for test storage... 00:04:53.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:53.740 11:20:28 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:53.740 OK 00:04:53.740 11:20:28 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:53.740 00:04:53.740 real 0m0.109s 00:04:53.740 user 0m0.042s 00:04:53.740 sys 0m0.075s 00:04:53.740 11:20:28 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.740 11:20:28 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:53.740 ************************************ 00:04:53.740 END TEST rpc_client 00:04:53.740 ************************************ 00:04:53.740 11:20:28 -- common/autotest_common.sh@1142 -- # return 0 00:04:53.740 11:20:28 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:53.740 11:20:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.740 11:20:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.740 11:20:28 -- common/autotest_common.sh@10 -- # set +x 00:04:53.740 ************************************ 00:04:53.740 START TEST json_config 00:04:53.740 ************************************ 00:04:53.740 11:20:28 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:53.740 11:20:28 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:53.740 11:20:28 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:53.740 11:20:28 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:53.740 11:20:28 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:53.740 11:20:28 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:53.740 11:20:28 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:53.740 11:20:28 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:53.740 11:20:28 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:53.740 11:20:28 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:53.740 
11:20:28 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:53.740 11:20:28 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:53.740 11:20:28 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:53.999 11:20:28 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:04:53.999 11:20:28 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:04:53.999 11:20:28 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:53.999 11:20:28 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:53.999 11:20:28 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:53.999 11:20:28 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:53.999 11:20:28 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:53.999 11:20:28 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:53.999 11:20:28 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:53.999 11:20:28 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:53.999 11:20:28 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.999 11:20:28 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.999 11:20:28 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.999 11:20:28 json_config -- paths/export.sh@5 -- # export PATH 00:04:53.999 11:20:28 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.999 11:20:28 json_config -- nvmf/common.sh@47 -- # : 0 00:04:53.999 11:20:28 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:53.999 11:20:28 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:53.999 11:20:28 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:53.999 11:20:28 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:54.000 11:20:28 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:54.000 11:20:28 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:54.000 11:20:28 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:54.000 11:20:28 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:54.000 11:20:28 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:54.000 11:20:28 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:54.000 11:20:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:54.000 11:20:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:54.000 11:20:28 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:54.000 11:20:28 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:54.000 11:20:28 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:54.000 11:20:28 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:54.000 11:20:28 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:54.000 11:20:28 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:54.000 11:20:28 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:54.000 11:20:28 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:54.000 11:20:28 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:54.000 11:20:28 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:54.000 11:20:28 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:54.000 11:20:28 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:54.000 INFO: JSON configuration test init 00:04:54.000 11:20:28 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:54.000 11:20:28 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:54.000 11:20:28 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:54.000 11:20:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.000 11:20:28 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:54.000 11:20:28 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:54.000 11:20:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.000 11:20:28 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:54.000 11:20:28 json_config -- json_config/common.sh@9 -- # local app=target 00:04:54.000 11:20:28 json_config -- json_config/common.sh@10 -- # shift 00:04:54.000 11:20:28 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:54.000 11:20:28 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:54.000 11:20:28 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:54.000 11:20:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:54.000 11:20:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:54.000 11:20:28 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2589001 00:04:54.000 11:20:28 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:54.000 Waiting for target to run... 00:04:54.000 11:20:28 json_config -- json_config/common.sh@25 -- # waitforlisten 2589001 /var/tmp/spdk_tgt.sock 00:04:54.000 11:20:28 json_config -- common/autotest_common.sh@829 -- # '[' -z 2589001 ']' 00:04:54.000 11:20:28 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:54.000 11:20:28 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:54.000 11:20:28 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:54.000 11:20:28 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:54.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:54.000 11:20:28 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:54.000 11:20:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.000 [2024-07-15 11:20:28.324697] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:04:54.000 [2024-07-15 11:20:28.324811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2589001 ] 00:04:54.000 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.259 [2024-07-15 11:20:28.713584] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.517 [2024-07-15 11:20:28.793212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.084 11:20:29 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:55.084 11:20:29 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:55.084 11:20:29 json_config -- json_config/common.sh@26 -- # echo '' 00:04:55.084 00:04:55.084 11:20:29 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:55.085 11:20:29 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:55.085 11:20:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:55.085 11:20:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.085 11:20:29 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:55.085 11:20:29 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:55.085 11:20:29 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:55.085 11:20:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.085 11:20:29 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:55.085 11:20:29 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:55.085 11:20:29 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:58.374 11:20:32 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:58.374 11:20:32 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:58.374 11:20:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:58.374 11:20:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.374 11:20:32 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:58.374 11:20:32 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:58.374 11:20:32 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:58.374 11:20:32 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:58.374 11:20:32 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:58.374 11:20:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:58.634 11:20:32 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:58.634 11:20:32 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:58.634 11:20:32 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:58.634 11:20:32 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:58.634 11:20:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:58.634 11:20:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.634 11:20:32 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:58.634 11:20:32 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:58.634 11:20:32 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:58.634 11:20:32 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:58.634 11:20:32 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:58.634 11:20:32 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:58.634 11:20:32 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:58.634 11:20:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:58.634 11:20:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.634 11:20:32 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:58.634 11:20:32 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:58.634 11:20:32 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:58.634 11:20:32 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:58.634 11:20:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:58.895 MallocForNvmf0 00:04:58.895 11:20:33 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:58.895 11:20:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:59.153 MallocForNvmf1 00:04:59.153 11:20:33 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:59.153 11:20:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:59.411 [2024-07-15 11:20:33.683136] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:59.411 11:20:33 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:59.411 11:20:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:59.669 11:20:33 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:59.669 11:20:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:59.927 11:20:34 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:59.927 11:20:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:59.927 11:20:34 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:59.927 11:20:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:00.186 [2024-07-15 11:20:34.602098] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:00.186 11:20:34 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:00.186 11:20:34 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:00.186 11:20:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.445 11:20:34 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:00.445 11:20:34 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:00.445 11:20:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.445 11:20:34 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:00.445 11:20:34 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:00.445 11:20:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:00.445 MallocBdevForConfigChangeCheck 00:05:00.445 11:20:34 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:00.445 11:20:34 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:00.445 11:20:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.445 11:20:34 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:00.445 11:20:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:01.012 11:20:35 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:01.012 INFO: shutting down applications... 00:05:01.012 11:20:35 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:01.012 11:20:35 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:01.012 11:20:35 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:01.012 11:20:35 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:02.917 Calling clear_iscsi_subsystem 00:05:02.917 Calling clear_nvmf_subsystem 00:05:02.917 Calling clear_nbd_subsystem 00:05:02.917 Calling clear_ublk_subsystem 00:05:02.917 Calling clear_vhost_blk_subsystem 00:05:02.917 Calling clear_vhost_scsi_subsystem 00:05:02.917 Calling clear_bdev_subsystem 00:05:02.917 11:20:36 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:02.917 11:20:36 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:02.917 11:20:36 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:02.917 11:20:36 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:02.917 11:20:36 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:02.917 11:20:36 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:02.917 11:20:37 json_config -- json_config/json_config.sh@345 -- # break 00:05:02.917 11:20:37 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:02.917 11:20:37 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:02.917 11:20:37 json_config -- json_config/common.sh@31 -- # local app=target 00:05:02.917 11:20:37 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:02.917 11:20:37 json_config -- json_config/common.sh@35 -- # [[ -n 2589001 ]] 00:05:02.917 11:20:37 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2589001 00:05:02.917 11:20:37 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:02.917 11:20:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.917 11:20:37 json_config -- json_config/common.sh@41 -- # kill -0 2589001 00:05:02.917 11:20:37 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:03.484 11:20:37 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:03.485 11:20:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.485 11:20:37 json_config -- json_config/common.sh@41 -- # kill -0 2589001 00:05:03.485 11:20:37 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:03.485 11:20:37 json_config -- json_config/common.sh@43 -- # break 00:05:03.485 11:20:37 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:03.485 11:20:37 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:05:03.485 SPDK target shutdown done 00:05:03.485 11:20:37 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:03.485 INFO: relaunching applications... 00:05:03.485 11:20:37 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:03.485 11:20:37 json_config -- json_config/common.sh@9 -- # local app=target 00:05:03.485 11:20:37 json_config -- json_config/common.sh@10 -- # shift 00:05:03.485 11:20:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:03.485 11:20:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:03.485 11:20:37 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:03.485 11:20:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.485 11:20:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.485 11:20:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2590935 00:05:03.485 11:20:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:03.485 Waiting for target to run... 00:05:03.485 11:20:37 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:03.485 11:20:37 json_config -- json_config/common.sh@25 -- # waitforlisten 2590935 /var/tmp/spdk_tgt.sock 00:05:03.485 11:20:37 json_config -- common/autotest_common.sh@829 -- # '[' -z 2590935 ']' 00:05:03.485 11:20:37 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:03.485 11:20:37 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.485 11:20:37 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:03.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:03.485 11:20:37 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.485 11:20:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.485 [2024-07-15 11:20:37.872070] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
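The restart traced here is the heart of the json_config test: the live configuration is saved over RPC, the target is shut down, and a fresh target is started directly from the saved file. A condensed sketch of that cycle, with the rpc.py and spdk_tgt invocations copied from the trace (the pid variable is illustrative; the test tracks it as app_pid[target]):

  $ scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
  $ kill -SIGINT "$target_pid"      # illustrative variable name
  $ build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &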
00:05:03.485 [2024-07-15 11:20:37.872132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2590935 ] 00:05:03.485 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.743 [2024-07-15 11:20:38.176474] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.002 [2024-07-15 11:20:38.256238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.284 [2024-07-15 11:20:41.303687] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:07.284 [2024-07-15 11:20:41.336035] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:07.284 11:20:41 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.284 11:20:41 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:07.284 11:20:41 json_config -- json_config/common.sh@26 -- # echo '' 00:05:07.284 00:05:07.284 11:20:41 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:07.284 11:20:41 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:07.284 INFO: Checking if target configuration is the same... 00:05:07.284 11:20:41 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.284 11:20:41 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:07.284 11:20:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:07.284 + '[' 2 -ne 2 ']' 00:05:07.284 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:07.284 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:07.284 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:07.284 +++ basename /dev/fd/62 00:05:07.284 ++ mktemp /tmp/62.XXX 00:05:07.284 + tmp_file_1=/tmp/62.gpN 00:05:07.284 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.284 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:07.284 + tmp_file_2=/tmp/spdk_tgt_config.json.lsW 00:05:07.284 + ret=0 00:05:07.284 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:07.543 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:07.543 + diff -u /tmp/62.gpN /tmp/spdk_tgt_config.json.lsW 00:05:07.543 + echo 'INFO: JSON config files are the same' 00:05:07.543 INFO: JSON config files are the same 00:05:07.543 + rm /tmp/62.gpN /tmp/spdk_tgt_config.json.lsW 00:05:07.543 + exit 0 00:05:07.543 11:20:41 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:07.543 11:20:41 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:07.543 INFO: changing configuration and checking if this can be detected... 
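The equality check above normalizes both JSON documents before diffing them, so key ordering does not produce false mismatches. A sketch of what json_diff.sh does with its two inputs (the temp-file names are illustrative and the redirections are inferred; the trace only shows the filter and diff commands):

  $ scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | test/json_config/config_filter.py -method sort > /tmp/live.sorted.json
  $ test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.sorted.json
  $ diff -u /tmp/live.sorted.json /tmp/saved.sorted.json && echo 'INFO: JSON config files are the same'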
00:05:07.543 11:20:41 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:07.543 11:20:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:07.802 11:20:42 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.802 11:20:42 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:07.802 11:20:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:07.802 + '[' 2 -ne 2 ']' 00:05:07.802 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:07.802 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:07.802 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:07.802 +++ basename /dev/fd/62 00:05:07.802 ++ mktemp /tmp/62.XXX 00:05:07.802 + tmp_file_1=/tmp/62.6v8 00:05:07.802 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.802 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:07.802 + tmp_file_2=/tmp/spdk_tgt_config.json.xeY 00:05:07.802 + ret=0 00:05:07.802 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:08.061 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:08.061 + diff -u /tmp/62.6v8 /tmp/spdk_tgt_config.json.xeY 00:05:08.061 + ret=1 00:05:08.061 + echo '=== Start of file: /tmp/62.6v8 ===' 00:05:08.061 + cat /tmp/62.6v8 00:05:08.061 + echo '=== End of file: /tmp/62.6v8 ===' 00:05:08.061 + echo '' 00:05:08.061 + echo '=== Start of file: /tmp/spdk_tgt_config.json.xeY ===' 00:05:08.061 + cat /tmp/spdk_tgt_config.json.xeY 00:05:08.061 + echo '=== End of file: /tmp/spdk_tgt_config.json.xeY ===' 00:05:08.061 + echo '' 00:05:08.061 + rm /tmp/62.6v8 /tmp/spdk_tgt_config.json.xeY 00:05:08.320 + exit 1 00:05:08.320 11:20:42 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:08.320 INFO: configuration change detected. 
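The change-detection pass inverts the previous check: the throwaway MallocBdevForConfigChangeCheck bdev is deleted over RPC and the same normalized diff is now expected to fail. A sketch using the calls shown in the trace:

  $ scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  $ test/json_config/json_diff.sh <(scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config) spdk_tgt_config.json
  # diff -u now reports the missing malloc bdev, json_diff.sh exits 1, and the wrapper prints 'INFO: configuration change detected.'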
00:05:08.320 11:20:42 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:08.320 11:20:42 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:08.320 11:20:42 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:08.320 11:20:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.320 11:20:42 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:08.320 11:20:42 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:08.320 11:20:42 json_config -- json_config/json_config.sh@317 -- # [[ -n 2590935 ]] 00:05:08.320 11:20:42 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:08.320 11:20:42 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:08.320 11:20:42 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:08.320 11:20:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.320 11:20:42 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:08.320 11:20:42 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:08.320 11:20:42 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:08.320 11:20:42 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:08.320 11:20:42 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:08.320 11:20:42 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:08.320 11:20:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:08.320 11:20:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.320 11:20:42 json_config -- json_config/json_config.sh@323 -- # killprocess 2590935 00:05:08.320 11:20:42 json_config -- common/autotest_common.sh@948 -- # '[' -z 2590935 ']' 00:05:08.320 11:20:42 json_config -- common/autotest_common.sh@952 -- # kill -0 2590935 00:05:08.320 11:20:42 json_config -- common/autotest_common.sh@953 -- # uname 00:05:08.320 11:20:42 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:08.320 11:20:42 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2590935 00:05:08.320 11:20:42 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:08.320 11:20:42 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:08.320 11:20:42 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2590935' 00:05:08.320 killing process with pid 2590935 00:05:08.320 11:20:42 json_config -- common/autotest_common.sh@967 -- # kill 2590935 00:05:08.320 11:20:42 json_config -- common/autotest_common.sh@972 -- # wait 2590935 00:05:09.772 11:20:44 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.772 11:20:44 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:09.772 11:20:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:09.772 11:20:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.772 11:20:44 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:09.772 11:20:44 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:09.772 INFO: Success 00:05:09.772 00:05:09.772 real 0m16.094s 
00:05:09.772 user 0m18.248s 00:05:09.772 sys 0m2.029s 00:05:09.772 11:20:44 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.772 11:20:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.772 ************************************ 00:05:09.772 END TEST json_config 00:05:09.772 ************************************ 00:05:10.030 11:20:44 -- common/autotest_common.sh@1142 -- # return 0 00:05:10.030 11:20:44 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:10.030 11:20:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.030 11:20:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.030 11:20:44 -- common/autotest_common.sh@10 -- # set +x 00:05:10.030 ************************************ 00:05:10.030 START TEST json_config_extra_key 00:05:10.030 ************************************ 00:05:10.030 11:20:44 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:10.030 11:20:44 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:10.030 11:20:44 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:10.030 11:20:44 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:10.030 11:20:44 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:10.030 11:20:44 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:10.030 11:20:44 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:10.030 11:20:44 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:10.030 11:20:44 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:10.030 11:20:44 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:10.030 11:20:44 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:10.030 11:20:44 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:10.030 11:20:44 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:10.030 11:20:44 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:05:10.030 11:20:44 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:05:10.030 11:20:44 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:10.030 11:20:44 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:10.030 11:20:44 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:10.030 11:20:44 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:10.031 11:20:44 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:10.031 11:20:44 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:10.031 11:20:44 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:10.031 11:20:44 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:10.031 11:20:44 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.031 11:20:44 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.031 11:20:44 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.031 11:20:44 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:10.031 11:20:44 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.031 11:20:44 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:10.031 11:20:44 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:10.031 11:20:44 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:10.031 11:20:44 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:10.031 11:20:44 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:10.031 11:20:44 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:10.031 11:20:44 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:10.031 11:20:44 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:10.031 11:20:44 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:10.031 11:20:44 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:10.031 11:20:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:10.031 11:20:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:10.031 11:20:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:10.031 11:20:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:10.031 11:20:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:10.031 11:20:44 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:10.031 11:20:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:10.031 11:20:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:10.031 11:20:44 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:10.031 11:20:44 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:10.031 INFO: launching applications... 00:05:10.031 11:20:44 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:10.031 11:20:44 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:10.031 11:20:44 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:10.031 11:20:44 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:10.031 11:20:44 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:10.031 11:20:44 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:10.031 11:20:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.031 11:20:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.031 11:20:44 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2592354 00:05:10.031 11:20:44 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:10.031 Waiting for target to run... 00:05:10.031 11:20:44 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2592354 /var/tmp/spdk_tgt.sock 00:05:10.031 11:20:44 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2592354 ']' 00:05:10.031 11:20:44 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:10.031 11:20:44 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:10.031 11:20:44 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.031 11:20:44 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:10.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:10.031 11:20:44 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.031 11:20:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:10.031 [2024-07-15 11:20:44.452452] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
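json_config_extra_key repeats the launch-and-teardown cycle against the static test/json_config/extra_key.json fixture instead of a freshly saved config. A sketch of what the wrapper does around the spdk_tgt invocation traced above (the polling loop is illustrative; the test itself relies on waitforlisten):

  $ build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json test/json_config/extra_key.json &
  $ pid=$!
  $ until scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods &> /dev/null; do sleep 0.5; done
  $ kill -SIGINT "$pid"             # clean shutdown, matching the SIGINT seen in the trace that follows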
00:05:10.031 [2024-07-15 11:20:44.452507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2592354 ] 00:05:10.031 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.290 [2024-07-15 11:20:44.754408] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.550 [2024-07-15 11:20:44.836047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.117 11:20:45 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.117 11:20:45 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:11.117 11:20:45 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:11.117 00:05:11.117 11:20:45 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:11.117 INFO: shutting down applications... 00:05:11.117 11:20:45 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:11.117 11:20:45 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:11.117 11:20:45 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:11.117 11:20:45 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2592354 ]] 00:05:11.117 11:20:45 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2592354 00:05:11.117 11:20:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:11.117 11:20:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.117 11:20:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2592354 00:05:11.117 11:20:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.685 11:20:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.685 11:20:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.685 11:20:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2592354 00:05:11.685 11:20:45 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:11.685 11:20:45 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:11.685 11:20:45 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:11.685 11:20:45 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:11.685 SPDK target shutdown done 00:05:11.685 11:20:45 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:11.685 Success 00:05:11.685 00:05:11.685 real 0m1.600s 00:05:11.685 user 0m1.533s 00:05:11.685 sys 0m0.395s 00:05:11.685 11:20:45 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.685 11:20:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:11.685 ************************************ 00:05:11.685 END TEST json_config_extra_key 00:05:11.685 ************************************ 00:05:11.685 11:20:45 -- common/autotest_common.sh@1142 -- # return 0 00:05:11.685 11:20:45 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:11.685 11:20:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.685 11:20:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.685 11:20:45 -- 
common/autotest_common.sh@10 -- # set +x 00:05:11.685 ************************************ 00:05:11.685 START TEST alias_rpc 00:05:11.685 ************************************ 00:05:11.685 11:20:45 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:11.685 * Looking for test storage... 00:05:11.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:11.685 11:20:46 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:11.685 11:20:46 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2592670 00:05:11.685 11:20:46 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2592670 00:05:11.685 11:20:46 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.685 11:20:46 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2592670 ']' 00:05:11.685 11:20:46 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.685 11:20:46 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.685 11:20:46 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.685 11:20:46 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.685 11:20:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.685 [2024-07-15 11:20:46.121227] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:05:11.685 [2024-07-15 11:20:46.121289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2592670 ] 00:05:11.945 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.945 [2024-07-15 11:20:46.202367] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.945 [2024-07-15 11:20:46.291840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.881 11:20:47 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.881 11:20:47 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:12.881 11:20:47 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:12.881 11:20:47 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2592670 00:05:12.881 11:20:47 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2592670 ']' 00:05:12.881 11:20:47 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2592670 00:05:12.881 11:20:47 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:12.881 11:20:47 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.881 11:20:47 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2592670 00:05:13.139 11:20:47 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:13.139 11:20:47 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:13.139 11:20:47 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2592670' 00:05:13.139 killing process with pid 2592670 00:05:13.139 11:20:47 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 2592670 00:05:13.139 11:20:47 alias_rpc -- common/autotest_common.sh@972 -- # wait 2592670 00:05:13.398 00:05:13.398 real 0m1.721s 00:05:13.398 user 0m2.013s 00:05:13.398 sys 0m0.456s 00:05:13.398 11:20:47 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.398 11:20:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.398 ************************************ 00:05:13.398 END TEST alias_rpc 00:05:13.398 ************************************ 00:05:13.398 11:20:47 -- common/autotest_common.sh@1142 -- # return 0 00:05:13.398 11:20:47 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:13.398 11:20:47 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:13.398 11:20:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.398 11:20:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.398 11:20:47 -- common/autotest_common.sh@10 -- # set +x 00:05:13.398 ************************************ 00:05:13.398 START TEST spdkcli_tcp 00:05:13.398 ************************************ 00:05:13.398 11:20:47 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:13.398 * Looking for test storage... 00:05:13.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:13.398 11:20:47 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:13.398 11:20:47 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:13.398 11:20:47 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:13.398 11:20:47 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:13.398 11:20:47 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:13.398 11:20:47 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:13.398 11:20:47 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:13.398 11:20:47 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:13.398 11:20:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:13.398 11:20:47 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2592993 00:05:13.398 11:20:47 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2592993 00:05:13.398 11:20:47 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:13.398 11:20:47 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2592993 ']' 00:05:13.398 11:20:47 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.398 11:20:47 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.398 11:20:47 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.398 11:20:47 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.398 11:20:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:13.657 [2024-07-15 11:20:47.919191] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:05:13.657 [2024-07-15 11:20:47.919259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2592993 ] 00:05:13.657 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.657 [2024-07-15 11:20:48.001965] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.657 [2024-07-15 11:20:48.091757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.657 [2024-07-15 11:20:48.091761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.591 11:20:48 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.591 11:20:48 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:14.591 11:20:48 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2593258 00:05:14.591 11:20:48 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:14.591 11:20:48 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:14.850 [ 00:05:14.850 "bdev_malloc_delete", 00:05:14.850 "bdev_malloc_create", 00:05:14.850 "bdev_null_resize", 00:05:14.850 "bdev_null_delete", 00:05:14.850 "bdev_null_create", 00:05:14.850 "bdev_nvme_cuse_unregister", 00:05:14.850 "bdev_nvme_cuse_register", 00:05:14.850 "bdev_opal_new_user", 00:05:14.850 "bdev_opal_set_lock_state", 00:05:14.850 "bdev_opal_delete", 00:05:14.850 "bdev_opal_get_info", 00:05:14.850 "bdev_opal_create", 00:05:14.850 "bdev_nvme_opal_revert", 00:05:14.850 "bdev_nvme_opal_init", 00:05:14.850 "bdev_nvme_send_cmd", 00:05:14.850 "bdev_nvme_get_path_iostat", 00:05:14.850 "bdev_nvme_get_mdns_discovery_info", 00:05:14.850 "bdev_nvme_stop_mdns_discovery", 00:05:14.850 "bdev_nvme_start_mdns_discovery", 00:05:14.850 "bdev_nvme_set_multipath_policy", 00:05:14.850 "bdev_nvme_set_preferred_path", 00:05:14.850 "bdev_nvme_get_io_paths", 00:05:14.850 "bdev_nvme_remove_error_injection", 00:05:14.850 "bdev_nvme_add_error_injection", 00:05:14.850 "bdev_nvme_get_discovery_info", 00:05:14.850 "bdev_nvme_stop_discovery", 00:05:14.850 "bdev_nvme_start_discovery", 00:05:14.850 "bdev_nvme_get_controller_health_info", 00:05:14.850 "bdev_nvme_disable_controller", 00:05:14.850 "bdev_nvme_enable_controller", 00:05:14.850 "bdev_nvme_reset_controller", 00:05:14.850 "bdev_nvme_get_transport_statistics", 00:05:14.850 "bdev_nvme_apply_firmware", 00:05:14.850 "bdev_nvme_detach_controller", 00:05:14.850 "bdev_nvme_get_controllers", 00:05:14.850 "bdev_nvme_attach_controller", 00:05:14.850 "bdev_nvme_set_hotplug", 00:05:14.850 "bdev_nvme_set_options", 00:05:14.850 "bdev_passthru_delete", 00:05:14.850 "bdev_passthru_create", 00:05:14.850 "bdev_lvol_set_parent_bdev", 00:05:14.850 "bdev_lvol_set_parent", 00:05:14.850 "bdev_lvol_check_shallow_copy", 00:05:14.850 "bdev_lvol_start_shallow_copy", 00:05:14.850 "bdev_lvol_grow_lvstore", 00:05:14.851 "bdev_lvol_get_lvols", 00:05:14.851 "bdev_lvol_get_lvstores", 00:05:14.851 "bdev_lvol_delete", 00:05:14.851 "bdev_lvol_set_read_only", 00:05:14.851 "bdev_lvol_resize", 00:05:14.851 "bdev_lvol_decouple_parent", 00:05:14.851 "bdev_lvol_inflate", 00:05:14.851 "bdev_lvol_rename", 00:05:14.851 "bdev_lvol_clone_bdev", 00:05:14.851 "bdev_lvol_clone", 00:05:14.851 "bdev_lvol_snapshot", 00:05:14.851 "bdev_lvol_create", 00:05:14.851 "bdev_lvol_delete_lvstore", 00:05:14.851 
"bdev_lvol_rename_lvstore", 00:05:14.851 "bdev_lvol_create_lvstore", 00:05:14.851 "bdev_raid_set_options", 00:05:14.851 "bdev_raid_remove_base_bdev", 00:05:14.851 "bdev_raid_add_base_bdev", 00:05:14.851 "bdev_raid_delete", 00:05:14.851 "bdev_raid_create", 00:05:14.851 "bdev_raid_get_bdevs", 00:05:14.851 "bdev_error_inject_error", 00:05:14.851 "bdev_error_delete", 00:05:14.851 "bdev_error_create", 00:05:14.851 "bdev_split_delete", 00:05:14.851 "bdev_split_create", 00:05:14.851 "bdev_delay_delete", 00:05:14.851 "bdev_delay_create", 00:05:14.851 "bdev_delay_update_latency", 00:05:14.851 "bdev_zone_block_delete", 00:05:14.851 "bdev_zone_block_create", 00:05:14.851 "blobfs_create", 00:05:14.851 "blobfs_detect", 00:05:14.851 "blobfs_set_cache_size", 00:05:14.851 "bdev_aio_delete", 00:05:14.851 "bdev_aio_rescan", 00:05:14.851 "bdev_aio_create", 00:05:14.851 "bdev_ftl_set_property", 00:05:14.851 "bdev_ftl_get_properties", 00:05:14.851 "bdev_ftl_get_stats", 00:05:14.851 "bdev_ftl_unmap", 00:05:14.851 "bdev_ftl_unload", 00:05:14.851 "bdev_ftl_delete", 00:05:14.851 "bdev_ftl_load", 00:05:14.851 "bdev_ftl_create", 00:05:14.851 "bdev_virtio_attach_controller", 00:05:14.851 "bdev_virtio_scsi_get_devices", 00:05:14.851 "bdev_virtio_detach_controller", 00:05:14.851 "bdev_virtio_blk_set_hotplug", 00:05:14.851 "bdev_iscsi_delete", 00:05:14.851 "bdev_iscsi_create", 00:05:14.851 "bdev_iscsi_set_options", 00:05:14.851 "accel_error_inject_error", 00:05:14.851 "ioat_scan_accel_module", 00:05:14.851 "dsa_scan_accel_module", 00:05:14.851 "iaa_scan_accel_module", 00:05:14.851 "vfu_virtio_create_scsi_endpoint", 00:05:14.851 "vfu_virtio_scsi_remove_target", 00:05:14.851 "vfu_virtio_scsi_add_target", 00:05:14.851 "vfu_virtio_create_blk_endpoint", 00:05:14.851 "vfu_virtio_delete_endpoint", 00:05:14.851 "keyring_file_remove_key", 00:05:14.851 "keyring_file_add_key", 00:05:14.851 "keyring_linux_set_options", 00:05:14.851 "iscsi_get_histogram", 00:05:14.851 "iscsi_enable_histogram", 00:05:14.851 "iscsi_set_options", 00:05:14.851 "iscsi_get_auth_groups", 00:05:14.851 "iscsi_auth_group_remove_secret", 00:05:14.851 "iscsi_auth_group_add_secret", 00:05:14.851 "iscsi_delete_auth_group", 00:05:14.851 "iscsi_create_auth_group", 00:05:14.851 "iscsi_set_discovery_auth", 00:05:14.851 "iscsi_get_options", 00:05:14.851 "iscsi_target_node_request_logout", 00:05:14.851 "iscsi_target_node_set_redirect", 00:05:14.851 "iscsi_target_node_set_auth", 00:05:14.851 "iscsi_target_node_add_lun", 00:05:14.851 "iscsi_get_stats", 00:05:14.851 "iscsi_get_connections", 00:05:14.851 "iscsi_portal_group_set_auth", 00:05:14.851 "iscsi_start_portal_group", 00:05:14.851 "iscsi_delete_portal_group", 00:05:14.851 "iscsi_create_portal_group", 00:05:14.851 "iscsi_get_portal_groups", 00:05:14.851 "iscsi_delete_target_node", 00:05:14.851 "iscsi_target_node_remove_pg_ig_maps", 00:05:14.851 "iscsi_target_node_add_pg_ig_maps", 00:05:14.851 "iscsi_create_target_node", 00:05:14.851 "iscsi_get_target_nodes", 00:05:14.851 "iscsi_delete_initiator_group", 00:05:14.851 "iscsi_initiator_group_remove_initiators", 00:05:14.851 "iscsi_initiator_group_add_initiators", 00:05:14.851 "iscsi_create_initiator_group", 00:05:14.851 "iscsi_get_initiator_groups", 00:05:14.851 "nvmf_set_crdt", 00:05:14.851 "nvmf_set_config", 00:05:14.851 "nvmf_set_max_subsystems", 00:05:14.851 "nvmf_stop_mdns_prr", 00:05:14.851 "nvmf_publish_mdns_prr", 00:05:14.851 "nvmf_subsystem_get_listeners", 00:05:14.851 "nvmf_subsystem_get_qpairs", 00:05:14.851 "nvmf_subsystem_get_controllers", 00:05:14.851 
"nvmf_get_stats", 00:05:14.851 "nvmf_get_transports", 00:05:14.851 "nvmf_create_transport", 00:05:14.851 "nvmf_get_targets", 00:05:14.851 "nvmf_delete_target", 00:05:14.851 "nvmf_create_target", 00:05:14.851 "nvmf_subsystem_allow_any_host", 00:05:14.851 "nvmf_subsystem_remove_host", 00:05:14.851 "nvmf_subsystem_add_host", 00:05:14.851 "nvmf_ns_remove_host", 00:05:14.851 "nvmf_ns_add_host", 00:05:14.851 "nvmf_subsystem_remove_ns", 00:05:14.851 "nvmf_subsystem_add_ns", 00:05:14.851 "nvmf_subsystem_listener_set_ana_state", 00:05:14.851 "nvmf_discovery_get_referrals", 00:05:14.851 "nvmf_discovery_remove_referral", 00:05:14.851 "nvmf_discovery_add_referral", 00:05:14.851 "nvmf_subsystem_remove_listener", 00:05:14.851 "nvmf_subsystem_add_listener", 00:05:14.851 "nvmf_delete_subsystem", 00:05:14.851 "nvmf_create_subsystem", 00:05:14.851 "nvmf_get_subsystems", 00:05:14.851 "env_dpdk_get_mem_stats", 00:05:14.851 "nbd_get_disks", 00:05:14.851 "nbd_stop_disk", 00:05:14.851 "nbd_start_disk", 00:05:14.851 "ublk_recover_disk", 00:05:14.851 "ublk_get_disks", 00:05:14.851 "ublk_stop_disk", 00:05:14.851 "ublk_start_disk", 00:05:14.851 "ublk_destroy_target", 00:05:14.851 "ublk_create_target", 00:05:14.851 "virtio_blk_create_transport", 00:05:14.851 "virtio_blk_get_transports", 00:05:14.851 "vhost_controller_set_coalescing", 00:05:14.851 "vhost_get_controllers", 00:05:14.851 "vhost_delete_controller", 00:05:14.851 "vhost_create_blk_controller", 00:05:14.851 "vhost_scsi_controller_remove_target", 00:05:14.851 "vhost_scsi_controller_add_target", 00:05:14.851 "vhost_start_scsi_controller", 00:05:14.851 "vhost_create_scsi_controller", 00:05:14.851 "thread_set_cpumask", 00:05:14.851 "framework_get_governor", 00:05:14.851 "framework_get_scheduler", 00:05:14.851 "framework_set_scheduler", 00:05:14.851 "framework_get_reactors", 00:05:14.851 "thread_get_io_channels", 00:05:14.851 "thread_get_pollers", 00:05:14.851 "thread_get_stats", 00:05:14.851 "framework_monitor_context_switch", 00:05:14.851 "spdk_kill_instance", 00:05:14.851 "log_enable_timestamps", 00:05:14.851 "log_get_flags", 00:05:14.851 "log_clear_flag", 00:05:14.851 "log_set_flag", 00:05:14.851 "log_get_level", 00:05:14.851 "log_set_level", 00:05:14.851 "log_get_print_level", 00:05:14.851 "log_set_print_level", 00:05:14.851 "framework_enable_cpumask_locks", 00:05:14.851 "framework_disable_cpumask_locks", 00:05:14.851 "framework_wait_init", 00:05:14.851 "framework_start_init", 00:05:14.851 "scsi_get_devices", 00:05:14.851 "bdev_get_histogram", 00:05:14.851 "bdev_enable_histogram", 00:05:14.851 "bdev_set_qos_limit", 00:05:14.851 "bdev_set_qd_sampling_period", 00:05:14.851 "bdev_get_bdevs", 00:05:14.851 "bdev_reset_iostat", 00:05:14.851 "bdev_get_iostat", 00:05:14.851 "bdev_examine", 00:05:14.851 "bdev_wait_for_examine", 00:05:14.851 "bdev_set_options", 00:05:14.851 "notify_get_notifications", 00:05:14.851 "notify_get_types", 00:05:14.851 "accel_get_stats", 00:05:14.851 "accel_set_options", 00:05:14.851 "accel_set_driver", 00:05:14.851 "accel_crypto_key_destroy", 00:05:14.851 "accel_crypto_keys_get", 00:05:14.851 "accel_crypto_key_create", 00:05:14.851 "accel_assign_opc", 00:05:14.851 "accel_get_module_info", 00:05:14.851 "accel_get_opc_assignments", 00:05:14.851 "vmd_rescan", 00:05:14.851 "vmd_remove_device", 00:05:14.851 "vmd_enable", 00:05:14.851 "sock_get_default_impl", 00:05:14.851 "sock_set_default_impl", 00:05:14.851 "sock_impl_set_options", 00:05:14.851 "sock_impl_get_options", 00:05:14.851 "iobuf_get_stats", 00:05:14.851 "iobuf_set_options", 
00:05:14.851 "keyring_get_keys", 00:05:14.851 "framework_get_pci_devices", 00:05:14.851 "framework_get_config", 00:05:14.851 "framework_get_subsystems", 00:05:14.851 "vfu_tgt_set_base_path", 00:05:14.851 "trace_get_info", 00:05:14.851 "trace_get_tpoint_group_mask", 00:05:14.851 "trace_disable_tpoint_group", 00:05:14.851 "trace_enable_tpoint_group", 00:05:14.851 "trace_clear_tpoint_mask", 00:05:14.851 "trace_set_tpoint_mask", 00:05:14.851 "spdk_get_version", 00:05:14.851 "rpc_get_methods" 00:05:14.851 ] 00:05:14.851 11:20:49 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:14.851 11:20:49 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:14.851 11:20:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:14.851 11:20:49 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:14.851 11:20:49 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2592993 00:05:14.851 11:20:49 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2592993 ']' 00:05:14.851 11:20:49 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2592993 00:05:14.851 11:20:49 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:14.851 11:20:49 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:14.851 11:20:49 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2592993 00:05:14.851 11:20:49 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:14.851 11:20:49 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:14.851 11:20:49 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2592993' 00:05:14.851 killing process with pid 2592993 00:05:14.851 11:20:49 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2592993 00:05:14.851 11:20:49 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2592993 00:05:15.110 00:05:15.110 real 0m1.787s 00:05:15.110 user 0m3.488s 00:05:15.110 sys 0m0.483s 00:05:15.110 11:20:49 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.110 11:20:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:15.110 ************************************ 00:05:15.110 END TEST spdkcli_tcp 00:05:15.110 ************************************ 00:05:15.368 11:20:49 -- common/autotest_common.sh@1142 -- # return 0 00:05:15.368 11:20:49 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:15.368 11:20:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.368 11:20:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.368 11:20:49 -- common/autotest_common.sh@10 -- # set +x 00:05:15.368 ************************************ 00:05:15.368 START TEST dpdk_mem_utility 00:05:15.368 ************************************ 00:05:15.368 11:20:49 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:15.368 * Looking for test storage... 
00:05:15.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:15.368 11:20:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:15.368 11:20:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2593339 00:05:15.368 11:20:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2593339 00:05:15.368 11:20:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.368 11:20:49 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2593339 ']' 00:05:15.368 11:20:49 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.368 11:20:49 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.368 11:20:49 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.368 11:20:49 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.368 11:20:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:15.368 [2024-07-15 11:20:49.758690] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:05:15.369 [2024-07-15 11:20:49.758752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2593339 ] 00:05:15.369 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.627 [2024-07-15 11:20:49.839646] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.627 [2024-07-15 11:20:49.933324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.887 11:20:50 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.887 11:20:50 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:15.887 11:20:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:15.887 11:20:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:15.887 11:20:50 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.887 11:20:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:15.887 { 00:05:15.887 "filename": "/tmp/spdk_mem_dump.txt" 00:05:15.887 } 00:05:15.887 11:20:50 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.887 11:20:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:15.887 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:15.887 1 heaps totaling size 814.000000 MiB 00:05:15.887 size: 814.000000 MiB heap id: 0 00:05:15.887 end heaps---------- 00:05:15.887 8 mempools totaling size 598.116089 MiB 00:05:15.887 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:15.887 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:15.887 size: 84.521057 MiB name: bdev_io_2593339 00:05:15.887 size: 51.011292 MiB name: evtpool_2593339 00:05:15.887 
size: 50.003479 MiB name: msgpool_2593339 00:05:15.887 size: 21.763794 MiB name: PDU_Pool 00:05:15.887 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:15.887 size: 0.026123 MiB name: Session_Pool 00:05:15.887 end mempools------- 00:05:15.887 6 memzones totaling size 4.142822 MiB 00:05:15.887 size: 1.000366 MiB name: RG_ring_0_2593339 00:05:15.887 size: 1.000366 MiB name: RG_ring_1_2593339 00:05:15.887 size: 1.000366 MiB name: RG_ring_4_2593339 00:05:15.887 size: 1.000366 MiB name: RG_ring_5_2593339 00:05:15.887 size: 0.125366 MiB name: RG_ring_2_2593339 00:05:15.887 size: 0.015991 MiB name: RG_ring_3_2593339 00:05:15.887 end memzones------- 00:05:15.887 11:20:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:16.147 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:16.147 list of free elements. size: 12.519348 MiB 00:05:16.147 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:16.147 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:16.147 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:16.147 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:16.147 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:16.147 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:16.147 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:16.147 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:16.147 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:16.147 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:16.147 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:16.147 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:16.147 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:16.147 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:16.147 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:16.147 list of standard malloc elements. 
size: 199.218079 MiB 00:05:16.147 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:16.147 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:16.147 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:16.147 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:16.147 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:16.147 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:16.147 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:16.147 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:16.147 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:16.147 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:16.147 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:16.147 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:16.147 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:16.147 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:16.147 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:16.147 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:16.147 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:16.147 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:16.147 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:16.147 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:16.147 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:16.147 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:16.147 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:16.147 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:16.147 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:16.147 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:16.147 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:16.147 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:16.147 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:16.147 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:16.147 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:16.147 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:16.147 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:16.147 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:16.147 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:16.147 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:16.147 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:16.147 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:16.147 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:16.147 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:16.147 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:16.147 list of memzone associated elements. 
size: 602.262573 MiB 00:05:16.147 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:16.147 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:16.147 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:16.147 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:16.147 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:16.147 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2593339_0 00:05:16.147 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:16.147 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2593339_0 00:05:16.147 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:16.147 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2593339_0 00:05:16.147 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:16.147 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:16.147 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:16.147 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:16.147 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:16.147 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2593339 00:05:16.147 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:16.147 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2593339 00:05:16.147 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:16.147 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2593339 00:05:16.147 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:16.147 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:16.147 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:16.147 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:16.147 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:16.147 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:16.147 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:16.147 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:16.147 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:16.147 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2593339 00:05:16.147 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:16.147 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2593339 00:05:16.147 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:16.147 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2593339 00:05:16.147 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:16.147 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2593339 00:05:16.147 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:16.147 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2593339 00:05:16.147 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:16.147 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:16.148 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:16.148 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:16.148 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:16.148 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:16.148 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:16.148 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2593339 00:05:16.148 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:16.148 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:16.148 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:16.148 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:16.148 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:16.148 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2593339 00:05:16.148 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:16.148 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:16.148 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:16.148 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2593339 00:05:16.148 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:16.148 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2593339 00:05:16.148 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:16.148 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:16.148 11:20:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:16.148 11:20:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2593339 00:05:16.148 11:20:50 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2593339 ']' 00:05:16.148 11:20:50 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2593339 00:05:16.148 11:20:50 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:16.148 11:20:50 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:16.148 11:20:50 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2593339 00:05:16.148 11:20:50 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:16.148 11:20:50 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:16.148 11:20:50 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2593339' 00:05:16.148 killing process with pid 2593339 00:05:16.148 11:20:50 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2593339 00:05:16.148 11:20:50 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2593339 00:05:16.407 00:05:16.407 real 0m1.180s 00:05:16.407 user 0m1.459s 00:05:16.407 sys 0m0.445s 00:05:16.407 11:20:50 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.407 11:20:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:16.407 ************************************ 00:05:16.407 END TEST dpdk_mem_utility 00:05:16.407 ************************************ 00:05:16.407 11:20:50 -- common/autotest_common.sh@1142 -- # return 0 00:05:16.407 11:20:50 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:16.407 11:20:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.407 11:20:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.407 11:20:50 -- common/autotest_common.sh@10 -- # set +x 00:05:16.407 ************************************ 00:05:16.407 START TEST event 00:05:16.407 ************************************ 00:05:16.407 11:20:50 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:16.666 * Looking for test storage... 
00:05:16.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:16.666 11:20:50 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:16.666 11:20:50 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:16.666 11:20:50 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:16.666 11:20:50 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:16.666 11:20:50 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.666 11:20:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.666 ************************************ 00:05:16.666 START TEST event_perf 00:05:16.666 ************************************ 00:05:16.666 11:20:50 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:16.666 Running I/O for 1 seconds...[2024-07-15 11:20:50.998422] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:05:16.666 [2024-07-15 11:20:50.998482] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2593650 ] 00:05:16.666 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.666 [2024-07-15 11:20:51.078931] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:16.925 [2024-07-15 11:20:51.171201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.925 [2024-07-15 11:20:51.171321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.925 [2024-07-15 11:20:51.171694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.925 [2024-07-15 11:20:51.171697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.862 Running I/O for 1 seconds... 00:05:17.862 lcore 0: 103913 00:05:17.862 lcore 1: 103915 00:05:17.862 lcore 2: 103918 00:05:17.862 lcore 3: 103915 00:05:17.862 done. 00:05:17.862 00:05:17.862 real 0m1.272s 00:05:17.862 user 0m4.168s 00:05:17.862 sys 0m0.094s 00:05:17.862 11:20:52 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.862 11:20:52 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:17.862 ************************************ 00:05:17.862 END TEST event_perf 00:05:17.862 ************************************ 00:05:17.862 11:20:52 event -- common/autotest_common.sh@1142 -- # return 0 00:05:17.862 11:20:52 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:17.862 11:20:52 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:17.862 11:20:52 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.862 11:20:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.862 ************************************ 00:05:17.862 START TEST event_reactor 00:05:17.862 ************************************ 00:05:17.862 11:20:52 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:17.862 [2024-07-15 11:20:52.326176] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:05:17.862 [2024-07-15 11:20:52.326241] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2593933 ] 00:05:18.120 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.120 [2024-07-15 11:20:52.405352] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.120 [2024-07-15 11:20:52.491283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.496 test_start 00:05:19.496 oneshot 00:05:19.496 tick 100 00:05:19.496 tick 100 00:05:19.496 tick 250 00:05:19.496 tick 100 00:05:19.496 tick 100 00:05:19.496 tick 250 00:05:19.496 tick 100 00:05:19.496 tick 500 00:05:19.496 tick 100 00:05:19.496 tick 100 00:05:19.496 tick 250 00:05:19.496 tick 100 00:05:19.496 tick 100 00:05:19.496 test_end 00:05:19.496 00:05:19.496 real 0m1.263s 00:05:19.496 user 0m1.172s 00:05:19.496 sys 0m0.085s 00:05:19.496 11:20:53 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.496 11:20:53 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:19.496 ************************************ 00:05:19.496 END TEST event_reactor 00:05:19.496 ************************************ 00:05:19.496 11:20:53 event -- common/autotest_common.sh@1142 -- # return 0 00:05:19.496 11:20:53 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:19.496 11:20:53 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:19.496 11:20:53 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.496 11:20:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.496 ************************************ 00:05:19.496 START TEST event_reactor_perf 00:05:19.496 ************************************ 00:05:19.496 11:20:53 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:19.496 [2024-07-15 11:20:53.659590] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:05:19.496 [2024-07-15 11:20:53.659644] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2594217 ] 00:05:19.496 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.496 [2024-07-15 11:20:53.741244] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.496 [2024-07-15 11:20:53.826869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.873 test_start 00:05:20.873 test_end 00:05:20.873 Performance: 313891 events per second 00:05:20.873 00:05:20.873 real 0m1.268s 00:05:20.873 user 0m1.172s 00:05:20.873 sys 0m0.092s 00:05:20.873 11:20:54 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.873 11:20:54 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.873 ************************************ 00:05:20.873 END TEST event_reactor_perf 00:05:20.873 ************************************ 00:05:20.873 11:20:54 event -- common/autotest_common.sh@1142 -- # return 0 00:05:20.873 11:20:54 event -- event/event.sh@49 -- # uname -s 00:05:20.873 11:20:54 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:20.873 11:20:54 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:20.873 11:20:54 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.873 11:20:54 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.873 11:20:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.873 ************************************ 00:05:20.873 START TEST event_scheduler 00:05:20.873 ************************************ 00:05:20.873 11:20:54 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:20.873 * Looking for test storage... 00:05:20.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:20.873 11:20:55 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:20.873 11:20:55 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2594523 00:05:20.873 11:20:55 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.873 11:20:55 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:20.873 11:20:55 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2594523 00:05:20.873 11:20:55 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2594523 ']' 00:05:20.873 11:20:55 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.873 11:20:55 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.873 11:20:55 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:20.873 11:20:55 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.873 11:20:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.874 [2024-07-15 11:20:55.140520] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:05:20.874 [2024-07-15 11:20:55.140633] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2594523 ] 00:05:20.874 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.874 [2024-07-15 11:20:55.287752] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:21.133 [2024-07-15 11:20:55.456431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.133 [2024-07-15 11:20:55.456471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.133 [2024-07-15 11:20:55.456591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:21.133 [2024-07-15 11:20:55.456599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:21.133 11:20:55 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.133 11:20:55 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:21.133 11:20:55 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:21.133 11:20:55 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.133 11:20:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:21.133 [2024-07-15 11:20:55.570090] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:21.133 [2024-07-15 11:20:55.570133] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:21.133 [2024-07-15 11:20:55.570159] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:21.133 [2024-07-15 11:20:55.570177] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:21.133 [2024-07-15 11:20:55.570193] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:21.133 11:20:55 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.133 11:20:55 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:21.133 11:20:55 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.133 11:20:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:21.393 [2024-07-15 11:20:55.693418] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
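Editor's note: the trace above switches the framework to the dynamic scheduler with framework_set_scheduler before framework_start_init; the dpdk governor fails to initialize (SMT sibling mask) and the run continues with the built-in dynamic scheduler defaults (load limit 20, core limit 80, core busy 95). A minimal stand-alone sketch of that same RPC sequence, assuming an SPDK app launched with --wait-for-rpc is already listening on /var/tmp/spdk.sock (the rpc.py path and socket below are taken from this workspace; all three RPC names appear in the rpc_get_methods list earlier in this log):

    # illustrative sketch only, not part of the captured run
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk.sock
    "$RPC" -s "$SOCK" framework_set_scheduler dynamic   # must be set before init completes
    "$RPC" -s "$SOCK" framework_start_init              # finish subsystem initialization
    "$RPC" -s "$SOCK" framework_get_scheduler            # confirm the active scheduler and its options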
00:05:21.393 11:20:55 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.393 11:20:55 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:21.393 11:20:55 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.393 11:20:55 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.393 11:20:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:21.393 ************************************ 00:05:21.393 START TEST scheduler_create_thread 00:05:21.393 ************************************ 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.393 2 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.393 3 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.393 4 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.393 5 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.393 6 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.393 7 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.393 8 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.393 9 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.393 10 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.393 11:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.961 11:20:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.961 11:20:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:21.961 11:20:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:21.961 11:20:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.961 11:20:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.898 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.898 11:20:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:22.898 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.898 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.835 11:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.835 11:20:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:23.835 11:20:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:23.835 11:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.835 11:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.772 11:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.772 00:05:24.772 real 0m3.233s 00:05:24.772 user 0m0.026s 00:05:24.772 sys 0m0.004s 00:05:24.772 11:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.772 11:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.772 ************************************ 00:05:24.772 END TEST scheduler_create_thread 00:05:24.772 ************************************ 00:05:24.772 11:20:58 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:24.772 11:20:58 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:24.772 11:20:58 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2594523 00:05:24.772 11:20:58 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2594523 ']' 00:05:24.772 11:20:58 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2594523 00:05:24.772 11:20:58 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:24.772 11:20:59 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.772 11:20:59 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2594523 00:05:24.772 11:20:59 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:24.772 11:20:59 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:24.772 11:20:59 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2594523' 00:05:24.772 killing process with pid 2594523 00:05:24.772 11:20:59 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2594523 00:05:24.772 11:20:59 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2594523 00:05:25.032 [2024-07-15 11:20:59.344202] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
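The scheduler_create_thread test above exercises the scheduler entirely over RPC: rpc_cmd in scheduler/scheduler.sh is a thin wrapper around scripts/rpc.py. A condensed sketch of the same call sequence, assuming the scheduler test app is already listening on its default RPC socket and that the scheduler_plugin module is reachable on the rpc.py plugin path, looks like:

    # four busy threads pinned to cores 0-3 (active ~100% of the time)
    for mask in 0x1 0x2 0x4 0x8; do
        scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
    done

    # four idle threads on the same cores (active 0%)
    for mask in 0x1 0x2 0x4 0x8; do
        scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
    done

    # unpinned threads with partial load; the create call prints the new thread id
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    thread_id=$(scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)

    # bump the last thread to 50% load, then create and immediately delete a short-lived thread
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    del_id=$(scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$del_id"

The thread names, core masks and activity percentages match the ones traced above; the loop structure and variable names are shorthand, not the literal scheduler.sh code.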
00:05:25.291 00:05:25.291 real 0m4.729s 00:05:25.291 user 0m8.451s 00:05:25.291 sys 0m0.502s 00:05:25.291 11:20:59 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.291 11:20:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.291 ************************************ 00:05:25.291 END TEST event_scheduler 00:05:25.291 ************************************ 00:05:25.291 11:20:59 event -- common/autotest_common.sh@1142 -- # return 0 00:05:25.291 11:20:59 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:25.291 11:20:59 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:25.291 11:20:59 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.291 11:20:59 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.291 11:20:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.550 ************************************ 00:05:25.550 START TEST app_repeat 00:05:25.550 ************************************ 00:05:25.550 11:20:59 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:25.550 11:20:59 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.550 11:20:59 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.550 11:20:59 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:25.550 11:20:59 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.550 11:20:59 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:25.550 11:20:59 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:25.550 11:20:59 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:25.550 11:20:59 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2595367 00:05:25.550 11:20:59 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.550 11:20:59 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:25.550 11:20:59 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2595367' 00:05:25.550 Process app_repeat pid: 2595367 00:05:25.550 11:20:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:25.550 11:20:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:25.550 spdk_app_start Round 0 00:05:25.550 11:20:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2595367 /var/tmp/spdk-nbd.sock 00:05:25.550 11:20:59 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2595367 ']' 00:05:25.550 11:20:59 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:25.550 11:20:59 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.550 11:20:59 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:25.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:25.550 11:20:59 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.550 11:20:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.550 [2024-07-15 11:20:59.821396] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
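The app_repeat setup traced here is driven directly from event.sh; with the long workspace paths shortened and waitforlisten being the polling helper from autotest_common.sh, the launch amounts to roughly:

    modprobe -n nbd                                     # dry-run check that the nbd module is available
    modprobe nbd
    test/event/app_repeat/app_repeat \
        -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &         # 2 cores, 4 app start/stop iterations
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    echo "Process app_repeat pid: $repeat_pid"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock  # block until the RPC socket accepts connections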
00:05:25.550 [2024-07-15 11:20:59.821461] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2595367 ] 00:05:25.550 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.550 [2024-07-15 11:20:59.907153] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.550 [2024-07-15 11:20:59.997414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.550 [2024-07-15 11:20:59.997419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.810 11:21:00 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.810 11:21:00 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:25.810 11:21:00 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.810 Malloc0 00:05:26.069 11:21:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.327 Malloc1 00:05:26.327 11:21:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.327 11:21:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.327 11:21:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.327 11:21:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:26.327 11:21:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.327 11:21:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:26.327 11:21:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.327 11:21:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.327 11:21:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.327 11:21:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:26.327 11:21:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.327 11:21:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:26.327 11:21:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:26.327 11:21:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:26.327 11:21:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.327 11:21:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:26.586 /dev/nbd0 00:05:26.586 11:21:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:26.586 11:21:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:26.586 11:21:00 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:26.586 11:21:00 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:26.586 11:21:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:26.586 11:21:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:26.586 11:21:00 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:26.586 11:21:00 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:26.586 11:21:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:26.586 11:21:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:26.586 11:21:00 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.586 1+0 records in 00:05:26.586 1+0 records out 00:05:26.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186302 s, 22.0 MB/s 00:05:26.586 11:21:00 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:26.586 11:21:00 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:26.586 11:21:00 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:26.586 11:21:00 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:26.586 11:21:00 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:26.586 11:21:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.586 11:21:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.586 11:21:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:26.845 /dev/nbd1 00:05:26.845 11:21:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:26.845 11:21:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:26.845 11:21:01 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:26.845 11:21:01 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:26.845 11:21:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:26.845 11:21:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:26.845 11:21:01 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:26.845 11:21:01 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:26.845 11:21:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:26.845 11:21:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:26.845 11:21:01 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.845 1+0 records in 00:05:26.845 1+0 records out 00:05:26.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240103 s, 17.1 MB/s 00:05:26.845 11:21:01 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:26.845 11:21:01 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:26.845 11:21:01 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:26.845 11:21:01 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:26.845 11:21:01 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:26.845 11:21:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.845 11:21:01 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.845 11:21:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.845 11:21:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.845 11:21:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:27.104 { 00:05:27.104 "nbd_device": "/dev/nbd0", 00:05:27.104 "bdev_name": "Malloc0" 00:05:27.104 }, 00:05:27.104 { 00:05:27.104 "nbd_device": "/dev/nbd1", 00:05:27.104 "bdev_name": "Malloc1" 00:05:27.104 } 00:05:27.104 ]' 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:27.104 { 00:05:27.104 "nbd_device": "/dev/nbd0", 00:05:27.104 "bdev_name": "Malloc0" 00:05:27.104 }, 00:05:27.104 { 00:05:27.104 "nbd_device": "/dev/nbd1", 00:05:27.104 "bdev_name": "Malloc1" 00:05:27.104 } 00:05:27.104 ]' 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:27.104 /dev/nbd1' 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:27.104 /dev/nbd1' 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:27.104 256+0 records in 00:05:27.104 256+0 records out 00:05:27.104 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104987 s, 99.9 MB/s 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:27.104 256+0 records in 00:05:27.104 256+0 records out 00:05:27.104 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199028 s, 52.7 MB/s 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:27.104 256+0 records in 00:05:27.104 256+0 records out 00:05:27.104 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0213191 s, 49.2 MB/s 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.104 11:21:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:27.364 11:21:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:27.364 11:21:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:27.364 11:21:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:27.364 11:21:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.364 11:21:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.364 11:21:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:27.364 11:21:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:27.364 11:21:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.364 11:21:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.364 11:21:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:27.623 11:21:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:27.623 11:21:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:27.623 11:21:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:27.623 11:21:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.623 11:21:02 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.623 11:21:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:27.623 11:21:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:27.623 11:21:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.623 11:21:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.623 11:21:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.623 11:21:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.882 11:21:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:27.882 11:21:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:27.882 11:21:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.882 11:21:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:27.882 11:21:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.882 11:21:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:27.882 11:21:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:27.882 11:21:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:27.882 11:21:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:27.882 11:21:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:27.882 11:21:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:27.882 11:21:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:27.882 11:21:02 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:28.140 11:21:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:28.399 [2024-07-15 11:21:02.736555] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.399 [2024-07-15 11:21:02.818061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.399 [2024-07-15 11:21:02.818066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.399 [2024-07-15 11:21:02.862482] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:28.399 [2024-07-15 11:21:02.862527] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:31.689 11:21:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:31.689 11:21:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:31.689 spdk_app_start Round 1 00:05:31.689 11:21:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2595367 /var/tmp/spdk-nbd.sock 00:05:31.689 11:21:05 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2595367 ']' 00:05:31.689 11:21:05 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:31.689 11:21:05 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.689 11:21:05 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:31.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
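Each round then runs the same malloc → nbd → verify cycle seen in the nbd_common.sh trace. Stripped of the xtrace noise and with paths shortened, the core of one round is approximately:

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    $rpc bdev_malloc_create 64 4096                   # Malloc0: 64 MiB, 4 KiB blocks
    $rpc bdev_malloc_create 64 4096                   # Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    # write 1 MiB of random data to each nbd device, then compare it back
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest "$nbd"               # any mismatch fails the round
    done
    rm nbdrandtest

    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1

All commands appear verbatim in the trace above; only the loop and the $rpc shorthand are condensed.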
00:05:31.689 11:21:05 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.690 11:21:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:31.690 11:21:05 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.690 11:21:05 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:31.690 11:21:05 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.690 Malloc0 00:05:31.690 11:21:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.948 Malloc1 00:05:31.948 11:21:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.948 11:21:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.948 11:21:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.948 11:21:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:31.948 11:21:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.948 11:21:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:31.948 11:21:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.948 11:21:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.948 11:21:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.948 11:21:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:31.948 11:21:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.948 11:21:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:31.948 11:21:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:31.948 11:21:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:31.948 11:21:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.948 11:21:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:32.207 /dev/nbd0 00:05:32.207 11:21:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:32.207 11:21:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:32.207 11:21:06 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:32.207 11:21:06 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:32.207 11:21:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:32.207 11:21:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:32.207 11:21:06 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:32.207 11:21:06 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:32.207 11:21:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:32.207 11:21:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:32.207 11:21:06 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:32.207 1+0 records in 00:05:32.207 1+0 records out 00:05:32.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192619 s, 21.3 MB/s 00:05:32.207 11:21:06 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.207 11:21:06 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:32.207 11:21:06 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.207 11:21:06 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:32.207 11:21:06 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:32.207 11:21:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.207 11:21:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.207 11:21:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:32.465 /dev/nbd1 00:05:32.465 11:21:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:32.465 11:21:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:32.465 11:21:06 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:32.465 11:21:06 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:32.465 11:21:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:32.465 11:21:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:32.465 11:21:06 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:32.466 11:21:06 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:32.466 11:21:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:32.466 11:21:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:32.466 11:21:06 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.466 1+0 records in 00:05:32.466 1+0 records out 00:05:32.466 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231456 s, 17.7 MB/s 00:05:32.466 11:21:06 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.466 11:21:06 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:32.466 11:21:06 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.466 11:21:06 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:32.466 11:21:06 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:32.466 11:21:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.466 11:21:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.466 11:21:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.466 11:21:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.466 11:21:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.723 11:21:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:32.723 { 00:05:32.723 "nbd_device": "/dev/nbd0", 00:05:32.723 "bdev_name": "Malloc0" 00:05:32.724 }, 00:05:32.724 { 00:05:32.724 "nbd_device": "/dev/nbd1", 00:05:32.724 "bdev_name": "Malloc1" 00:05:32.724 } 00:05:32.724 ]' 00:05:32.724 11:21:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:32.724 { 00:05:32.724 "nbd_device": "/dev/nbd0", 00:05:32.724 "bdev_name": "Malloc0" 00:05:32.724 }, 00:05:32.724 { 00:05:32.724 "nbd_device": "/dev/nbd1", 00:05:32.724 "bdev_name": "Malloc1" 00:05:32.724 } 00:05:32.724 ]' 00:05:32.724 11:21:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.724 11:21:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:32.724 /dev/nbd1' 00:05:32.724 11:21:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:32.724 /dev/nbd1' 00:05:32.724 11:21:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.724 11:21:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:32.724 11:21:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:32.724 11:21:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:32.724 11:21:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:32.724 11:21:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:32.724 11:21:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.724 11:21:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.724 11:21:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:32.724 11:21:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.724 11:21:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:32.724 11:21:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:32.724 256+0 records in 00:05:32.724 256+0 records out 00:05:32.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104387 s, 100 MB/s 00:05:32.724 11:21:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.724 11:21:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:32.724 256+0 records in 00:05:32.724 256+0 records out 00:05:32.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200272 s, 52.4 MB/s 00:05:32.724 11:21:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.724 11:21:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:32.982 256+0 records in 00:05:32.982 256+0 records out 00:05:32.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209444 s, 50.1 MB/s 00:05:32.982 11:21:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:32.982 11:21:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.982 11:21:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.982 11:21:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:32.982 11:21:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.982 11:21:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:32.982 11:21:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:32.982 11:21:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.982 11:21:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:32.982 11:21:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.982 11:21:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:32.983 11:21:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.983 11:21:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:32.983 11:21:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.983 11:21:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.983 11:21:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:32.983 11:21:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:32.983 11:21:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.983 11:21:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:33.242 11:21:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:33.242 11:21:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:33.242 11:21:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:33.242 11:21:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.242 11:21:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.242 11:21:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:33.242 11:21:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:33.242 11:21:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.242 11:21:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.242 11:21:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:33.501 11:21:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:33.501 11:21:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:33.501 11:21:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:33.501 11:21:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.501 11:21:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.501 11:21:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:33.501 11:21:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:33.501 11:21:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.501 11:21:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.501 11:21:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.501 11:21:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.760 11:21:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:33.760 11:21:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:33.760 11:21:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.760 11:21:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:33.760 11:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:33.760 11:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.760 11:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:33.760 11:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:33.760 11:21:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:33.760 11:21:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:33.760 11:21:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:33.760 11:21:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:33.760 11:21:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:34.020 11:21:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:34.278 [2024-07-15 11:21:08.559921] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.278 [2024-07-15 11:21:08.643193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.278 [2024-07-15 11:21:08.643198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.278 [2024-07-15 11:21:08.688591] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:34.278 [2024-07-15 11:21:08.688639] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:37.566 11:21:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:37.566 11:21:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:37.566 spdk_app_start Round 2 00:05:37.566 11:21:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2595367 /var/tmp/spdk-nbd.sock 00:05:37.566 11:21:11 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2595367 ']' 00:05:37.566 11:21:11 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.566 11:21:11 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.566 11:21:11 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:37.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
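After stopping both nbd devices, nbd_common.sh double-checks that nothing is left registered before the round ends; the check traced above boils down to something like:

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    disks_json=$($rpc nbd_get_disks)                      # '[]' once both disks are stopped
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)     # grep -c exits non-zero when there are no matches
    [ "$count" -eq 0 ]                                    # succeed only if no nbd devices remain

The jq filter, the grep -c count and the zero comparison are taken from the trace; the variable names and the || true guard are a paraphrase of the helper, not its exact text.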
00:05:37.566 11:21:11 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.566 11:21:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.566 11:21:11 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.566 11:21:11 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:37.566 11:21:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.566 Malloc0 00:05:37.566 11:21:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.825 Malloc1 00:05:37.825 11:21:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.825 11:21:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.825 11:21:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.825 11:21:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:37.825 11:21:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.825 11:21:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:37.825 11:21:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.825 11:21:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.825 11:21:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.825 11:21:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:37.825 11:21:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.825 11:21:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:37.825 11:21:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:37.825 11:21:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:37.825 11:21:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.825 11:21:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:38.084 /dev/nbd0 00:05:38.084 11:21:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:38.084 11:21:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:38.084 11:21:12 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:38.084 11:21:12 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:38.084 11:21:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:38.084 11:21:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:38.084 11:21:12 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:38.084 11:21:12 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:38.084 11:21:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:38.084 11:21:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:38.085 11:21:12 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:38.085 1+0 records in 00:05:38.085 1+0 records out 00:05:38.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224283 s, 18.3 MB/s 00:05:38.085 11:21:12 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.085 11:21:12 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:38.085 11:21:12 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.085 11:21:12 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:38.085 11:21:12 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:38.085 11:21:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.085 11:21:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.085 11:21:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:38.344 /dev/nbd1 00:05:38.344 11:21:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.344 11:21:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.344 11:21:12 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:38.344 11:21:12 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:38.344 11:21:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:38.344 11:21:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:38.344 11:21:12 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:38.344 11:21:12 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:38.344 11:21:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:38.344 11:21:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:38.344 11:21:12 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.344 1+0 records in 00:05:38.344 1+0 records out 00:05:38.344 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245172 s, 16.7 MB/s 00:05:38.344 11:21:12 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.344 11:21:12 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:38.344 11:21:12 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.344 11:21:12 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:38.344 11:21:12 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:38.344 11:21:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.344 11:21:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.344 11:21:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.344 11:21:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.344 11:21:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.604 11:21:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:38.604 { 00:05:38.604 "nbd_device": "/dev/nbd0", 00:05:38.604 "bdev_name": "Malloc0" 00:05:38.604 }, 00:05:38.604 { 00:05:38.604 "nbd_device": "/dev/nbd1", 00:05:38.604 "bdev_name": "Malloc1" 00:05:38.604 } 00:05:38.604 ]' 00:05:38.604 11:21:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:38.604 { 00:05:38.604 "nbd_device": "/dev/nbd0", 00:05:38.604 "bdev_name": "Malloc0" 00:05:38.604 }, 00:05:38.604 { 00:05:38.604 "nbd_device": "/dev/nbd1", 00:05:38.604 "bdev_name": "Malloc1" 00:05:38.604 } 00:05:38.604 ]' 00:05:38.604 11:21:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.604 11:21:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:38.604 /dev/nbd1' 00:05:38.604 11:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.604 11:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:38.604 /dev/nbd1' 00:05:38.604 11:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:38.604 11:21:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:38.604 11:21:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:38.604 11:21:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:38.604 11:21:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:38.604 11:21:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.604 11:21:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.604 11:21:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:38.604 11:21:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.604 11:21:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:38.604 11:21:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:38.604 256+0 records in 00:05:38.604 256+0 records out 00:05:38.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102968 s, 102 MB/s 00:05:38.863 11:21:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.863 11:21:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:38.863 256+0 records in 00:05:38.863 256+0 records out 00:05:38.863 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196587 s, 53.3 MB/s 00:05:38.863 11:21:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.863 11:21:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:38.863 256+0 records in 00:05:38.863 256+0 records out 00:05:38.863 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209316 s, 50.1 MB/s 00:05:38.863 11:21:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:38.863 11:21:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.863 11:21:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.863 11:21:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:38.863 11:21:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.863 11:21:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:38.863 11:21:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:38.863 11:21:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.863 11:21:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:38.863 11:21:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.863 11:21:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:38.863 11:21:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.863 11:21:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:38.863 11:21:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.863 11:21:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.863 11:21:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:38.863 11:21:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:38.863 11:21:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.863 11:21:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:39.122 11:21:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:39.122 11:21:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:39.122 11:21:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:39.122 11:21:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.122 11:21:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.122 11:21:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:39.122 11:21:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.122 11:21:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.122 11:21:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.122 11:21:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:39.381 11:21:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:39.381 11:21:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:39.381 11:21:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:39.381 11:21:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.381 11:21:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.381 11:21:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:39.381 11:21:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.381 11:21:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.381 11:21:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.381 11:21:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.381 11:21:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.641 11:21:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.641 11:21:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.641 11:21:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.641 11:21:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.641 11:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.641 11:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.641 11:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:39.641 11:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.641 11:21:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.641 11:21:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.641 11:21:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.641 11:21:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.641 11:21:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:39.901 11:21:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:40.161 [2024-07-15 11:21:14.393724] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.161 [2024-07-15 11:21:14.475447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.161 [2024-07-15 11:21:14.475452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.161 [2024-07-15 11:21:14.520780] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:40.161 [2024-07-15 11:21:14.520815] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:43.454 11:21:17 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2595367 /var/tmp/spdk-nbd.sock 00:05:43.454 11:21:17 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2595367 ']' 00:05:43.454 11:21:17 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.454 11:21:17 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.454 11:21:17 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:43.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
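The Round 0/1/2 banners come from the repeat loop in event.sh: the app_repeat process stays alive for the whole test and only its SPDK app instance is killed and restarted between rounds. A condensed sketch of that loop, reusing the cycle shown earlier, is:

    for i in 0 1 2; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
        # ... malloc/nbd/verify cycle as sketched above ...
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3                                           # give app_repeat time to restart its app
    done
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock    # Round 3: the last of the -t 4 iterations

The loop bounds, the SIGTERM-via-RPC restart and the 3 second sleep come from the trace; the exact layout is shorthand.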
00:05:43.454 11:21:17 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.454 11:21:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.454 11:21:17 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.454 11:21:17 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:43.454 11:21:17 event.app_repeat -- event/event.sh@39 -- # killprocess 2595367 00:05:43.454 11:21:17 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2595367 ']' 00:05:43.454 11:21:17 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2595367 00:05:43.454 11:21:17 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:43.454 11:21:17 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.454 11:21:17 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2595367 00:05:43.454 11:21:17 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:43.454 11:21:17 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.454 11:21:17 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2595367' 00:05:43.454 killing process with pid 2595367 00:05:43.454 11:21:17 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2595367 00:05:43.454 11:21:17 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2595367 00:05:43.454 spdk_app_start is called in Round 0. 00:05:43.454 Shutdown signal received, stop current app iteration 00:05:43.454 Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 reinitialization... 00:05:43.454 spdk_app_start is called in Round 1. 00:05:43.454 Shutdown signal received, stop current app iteration 00:05:43.454 Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 reinitialization... 00:05:43.454 spdk_app_start is called in Round 2. 00:05:43.454 Shutdown signal received, stop current app iteration 00:05:43.454 Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 reinitialization... 00:05:43.454 spdk_app_start is called in Round 3. 
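The repeated killprocess calls in this section all follow the same traced pattern: confirm the pid is still alive, look up its command name (reactor_0 for every target in these runs), then kill and wait. A hedged reconstruction, with the sudo branch left as a stub because it is never taken in this log:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1            # refuse an empty pid argument
        kill -0 "$pid"                       # make sure the process still exists
        local process_name
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [[ $process_name == sudo ]]; then
            :                                # assumed: special handling for sudo wrappers
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reap it so the next test starts from a clean slate
    }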
00:05:43.454 Shutdown signal received, stop current app iteration 00:05:43.454 11:21:17 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:43.454 11:21:17 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:43.454 00:05:43.454 real 0m17.909s 00:05:43.454 user 0m39.830s 00:05:43.454 sys 0m2.861s 00:05:43.454 11:21:17 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.454 11:21:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.454 ************************************ 00:05:43.454 END TEST app_repeat 00:05:43.454 ************************************ 00:05:43.454 11:21:17 event -- common/autotest_common.sh@1142 -- # return 0 00:05:43.454 11:21:17 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:43.454 11:21:17 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:43.454 11:21:17 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.454 11:21:17 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.454 11:21:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.454 ************************************ 00:05:43.454 START TEST cpu_locks 00:05:43.454 ************************************ 00:05:43.454 11:21:17 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:43.454 * Looking for test storage... 00:05:43.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:43.454 11:21:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:43.454 11:21:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:43.454 11:21:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:43.454 11:21:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:43.454 11:21:17 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.454 11:21:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.454 11:21:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.454 ************************************ 00:05:43.454 START TEST default_locks 00:05:43.454 ************************************ 00:05:43.454 11:21:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:43.454 11:21:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2599494 00:05:43.454 11:21:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2599494 00:05:43.454 11:21:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.454 11:21:17 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2599494 ']' 00:05:43.455 11:21:17 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.455 11:21:17 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.455 11:21:17 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
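Each spdk_tgt launch in the lock tests below is followed by waitforlisten, whose polling loop never appears in the log because the function disables xtrace on entry. A plausible sketch of what runs between the "Waiting for process..." line and the return, assuming the probe is an rpc.py call against the given socket (max_retries=100 is visible in the trace; the probe command and retry interval are assumptions):

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1                               # target died while starting
            if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                                             # socket is up and answering
            fi
            sleep 0.5
        done
        return 1
    }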
00:05:43.455 11:21:17 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.455 11:21:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.774 [2024-07-15 11:21:17.932337] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:05:43.774 [2024-07-15 11:21:17.932395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2599494 ] 00:05:43.774 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.774 [2024-07-15 11:21:18.011597] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.774 [2024-07-15 11:21:18.102041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.060 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.060 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:44.060 11:21:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2599494 00:05:44.060 11:21:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2599494 00:05:44.060 11:21:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.319 lslocks: write error 00:05:44.319 11:21:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2599494 00:05:44.319 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2599494 ']' 00:05:44.319 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2599494 00:05:44.319 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:44.319 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.319 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2599494 00:05:44.319 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.319 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.319 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2599494' 00:05:44.319 killing process with pid 2599494 00:05:44.319 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2599494 00:05:44.319 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2599494 00:05:44.578 11:21:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2599494 00:05:44.578 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:44.578 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2599494 00:05:44.578 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:44.578 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.578 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:44.578 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.578 11:21:18 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 2599494 00:05:44.578 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2599494 ']' 00:05:44.578 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.578 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.578 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.578 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.578 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2599494) - No such process 00:05:44.578 ERROR: process (pid: 2599494) is no longer running 00:05:44.578 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.578 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:44.578 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:44.578 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:44.578 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:44.578 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:44.578 11:21:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:44.578 11:21:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:44.578 11:21:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:44.578 11:21:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:44.578 00:05:44.578 real 0m1.120s 00:05:44.578 user 0m1.104s 00:05:44.578 sys 0m0.488s 00:05:44.578 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.578 11:21:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.578 ************************************ 00:05:44.578 END TEST default_locks 00:05:44.578 ************************************ 00:05:44.578 11:21:19 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:44.578 11:21:19 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:44.578 11:21:19 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.578 11:21:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.578 11:21:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.836 ************************************ 00:05:44.836 START TEST default_locks_via_rpc 00:05:44.836 ************************************ 00:05:44.836 11:21:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:44.836 11:21:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2599793 00:05:44.836 11:21:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2599793 00:05:44.836 11:21:19 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@829 -- # '[' -z 2599793 ']' 00:05:44.836 11:21:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.836 11:21:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.836 11:21:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.836 11:21:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.836 11:21:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.836 11:21:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.836 [2024-07-15 11:21:19.111435] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:05:44.836 [2024-07-15 11:21:19.111488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2599793 ] 00:05:44.836 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.836 [2024-07-15 11:21:19.192891] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.836 [2024-07-15 11:21:19.282212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.213 11:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.213 11:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:46.213 11:21:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:46.213 11:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.213 11:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.213 11:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.213 11:21:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:46.213 11:21:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:46.213 11:21:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:46.213 11:21:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:46.213 11:21:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:46.213 11:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.213 11:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.213 11:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.213 11:21:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2599793 00:05:46.213 11:21:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2599793 00:05:46.213 11:21:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
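The locks_exist checks at cpu_locks.sh line 22 (and the harmless "lslocks: write error" they print when grep -q closes the pipe early) reduce to a single pipeline; reconstructed from the trace:

    locks_exist() {
        local pid=$1
        # a target started with a core mask holds a POSIX lock on one
        # /var/tmp/spdk_cpu_lock_* file per claimed core, so lslocks must show it
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }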
00:05:46.473 11:21:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2599793 00:05:46.473 11:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2599793 ']' 00:05:46.473 11:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2599793 00:05:46.473 11:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:46.473 11:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.473 11:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2599793 00:05:46.473 11:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.473 11:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.473 11:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2599793' 00:05:46.473 killing process with pid 2599793 00:05:46.473 11:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 2599793 00:05:46.473 11:21:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2599793 00:05:47.041 00:05:47.041 real 0m2.161s 00:05:47.041 user 0m2.590s 00:05:47.041 sys 0m0.667s 00:05:47.041 11:21:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.041 11:21:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.041 ************************************ 00:05:47.041 END TEST default_locks_via_rpc 00:05:47.041 ************************************ 00:05:47.041 11:21:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:47.041 11:21:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:47.041 11:21:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.041 11:21:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.041 11:21:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.041 ************************************ 00:05:47.041 START TEST non_locking_app_on_locked_coremask 00:05:47.041 ************************************ 00:05:47.041 11:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:47.041 11:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2600093 00:05:47.041 11:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2600093 /var/tmp/spdk.sock 00:05:47.041 11:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2600093 ']' 00:05:47.041 11:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.041 11:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.041 11:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.041 11:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.041 11:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.041 11:21:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.041 [2024-07-15 11:21:21.337744] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:05:47.041 [2024-07-15 11:21:21.337795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2600093 ] 00:05:47.041 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.041 [2024-07-15 11:21:21.418773] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.300 [2024-07-15 11:21:21.508880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.236 11:21:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.236 11:21:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:48.236 11:21:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2600358 00:05:48.236 11:21:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2600358 /var/tmp/spdk2.sock 00:05:48.236 11:21:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2600358 ']' 00:05:48.236 11:21:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.236 11:21:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.236 11:21:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:48.236 11:21:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:48.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.236 11:21:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.236 11:21:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.236 [2024-07-15 11:21:22.617128] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:05:48.236 [2024-07-15 11:21:22.617239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2600358 ] 00:05:48.236 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.495 [2024-07-15 11:21:22.760728] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
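Condensed, the non_locking_app_on_locked_coremask case traced here boils down to starting two targets on the same core mask, with the second one opting out of lock enforcement; paraphrased from the traced commands (paths shortened):

    # first instance claims core 0 and creates its lock file
    build/bin/spdk_tgt -m 0x1 &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"

    # second instance reuses the same mask but skips the lock check,
    # so it starts cleanly and logs "CPU core locks deactivated"
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!
    waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock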
00:05:48.495 [2024-07-15 11:21:22.760757] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.495 [2024-07-15 11:21:22.936581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.062 11:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.062 11:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:49.062 11:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2600093 00:05:49.062 11:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.062 11:21:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2600093 00:05:49.999 lslocks: write error 00:05:49.999 11:21:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2600093 00:05:49.999 11:21:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2600093 ']' 00:05:49.999 11:21:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2600093 00:05:49.999 11:21:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:49.999 11:21:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.999 11:21:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2600093 00:05:49.999 11:21:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.999 11:21:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.999 11:21:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2600093' 00:05:49.999 killing process with pid 2600093 00:05:49.999 11:21:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2600093 00:05:49.999 11:21:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2600093 00:05:50.936 11:21:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2600358 00:05:50.936 11:21:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2600358 ']' 00:05:50.936 11:21:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2600358 00:05:50.936 11:21:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:50.936 11:21:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.936 11:21:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2600358 00:05:50.936 11:21:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:50.936 11:21:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.936 11:21:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2600358' 00:05:50.936 
killing process with pid 2600358 00:05:50.936 11:21:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2600358 00:05:50.936 11:21:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2600358 00:05:51.195 00:05:51.195 real 0m4.207s 00:05:51.195 user 0m4.952s 00:05:51.195 sys 0m1.199s 00:05:51.195 11:21:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.195 11:21:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.195 ************************************ 00:05:51.195 END TEST non_locking_app_on_locked_coremask 00:05:51.195 ************************************ 00:05:51.195 11:21:25 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:51.195 11:21:25 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:51.195 11:21:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.195 11:21:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.195 11:21:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.195 ************************************ 00:05:51.195 START TEST locking_app_on_unlocked_coremask 00:05:51.195 ************************************ 00:05:51.195 11:21:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:51.195 11:21:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2600916 00:05:51.195 11:21:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2600916 /var/tmp/spdk.sock 00:05:51.195 11:21:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:51.195 11:21:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2600916 ']' 00:05:51.195 11:21:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.195 11:21:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.196 11:21:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.196 11:21:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.196 11:21:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.196 [2024-07-15 11:21:25.615695] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:05:51.196 [2024-07-15 11:21:25.615753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2600916 ] 00:05:51.196 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.455 [2024-07-15 11:21:25.698582] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:51.455 [2024-07-15 11:21:25.698612] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.455 [2024-07-15 11:21:25.785395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.023 11:21:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.023 11:21:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:52.023 11:21:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:52.023 11:21:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2601178 00:05:52.023 11:21:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2601178 /var/tmp/spdk2.sock 00:05:52.023 11:21:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2601178 ']' 00:05:52.023 11:21:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.023 11:21:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.023 11:21:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.023 11:21:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.023 11:21:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.282 [2024-07-15 11:21:26.525358] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:05:52.282 [2024-07-15 11:21:26.525419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2601178 ] 00:05:52.282 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.282 [2024-07-15 11:21:26.636746] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.541 [2024-07-15 11:21:26.812328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.106 11:21:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.106 11:21:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:53.106 11:21:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2601178 00:05:53.106 11:21:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2601178 00:05:53.106 11:21:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.672 lslocks: write error 00:05:53.673 11:21:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2600916 00:05:53.673 11:21:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2600916 ']' 00:05:53.673 11:21:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2600916 00:05:53.673 11:21:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:53.673 11:21:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.673 11:21:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2600916 00:05:53.673 11:21:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.673 11:21:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.673 11:21:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2600916' 00:05:53.673 killing process with pid 2600916 00:05:53.673 11:21:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2600916 00:05:53.673 11:21:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2600916 00:05:54.607 11:21:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2601178 00:05:54.607 11:21:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2601178 ']' 00:05:54.607 11:21:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2601178 00:05:54.607 11:21:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:54.607 11:21:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.607 11:21:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2601178 00:05:54.607 11:21:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:05:54.607 11:21:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.607 11:21:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2601178' 00:05:54.607 killing process with pid 2601178 00:05:54.607 11:21:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2601178 00:05:54.607 11:21:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2601178 00:05:54.866 00:05:54.866 real 0m3.568s 00:05:54.866 user 0m3.989s 00:05:54.866 sys 0m1.010s 00:05:54.866 11:21:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.866 11:21:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.866 ************************************ 00:05:54.866 END TEST locking_app_on_unlocked_coremask 00:05:54.866 ************************************ 00:05:54.866 11:21:29 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:54.866 11:21:29 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:54.866 11:21:29 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.866 11:21:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.866 11:21:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.866 ************************************ 00:05:54.866 START TEST locking_app_on_locked_coremask 00:05:54.866 ************************************ 00:05:54.866 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:54.866 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2601733 00:05:54.866 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2601733 /var/tmp/spdk.sock 00:05:54.866 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.867 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2601733 ']' 00:05:54.867 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.867 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.867 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.867 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.867 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.867 [2024-07-15 11:21:29.241347] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:05:54.867 [2024-07-15 11:21:29.241400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2601733 ] 00:05:54.867 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.867 [2024-07-15 11:21:29.322975] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.125 [2024-07-15 11:21:29.413434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.383 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.383 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:55.383 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2601742 00:05:55.383 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2601742 /var/tmp/spdk2.sock 00:05:55.383 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:55.383 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:55.383 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2601742 /var/tmp/spdk2.sock 00:05:55.383 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:55.383 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.384 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:55.384 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.384 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2601742 /var/tmp/spdk2.sock 00:05:55.384 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2601742 ']' 00:05:55.384 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.384 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.384 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.384 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.384 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.384 [2024-07-15 11:21:29.721867] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:05:55.384 [2024-07-15 11:21:29.721978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2601742 ] 00:05:55.384 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.642 [2024-07-15 11:21:29.862099] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2601733 has claimed it. 00:05:55.642 [2024-07-15 11:21:29.862141] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:55.900 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2601742) - No such process 00:05:55.900 ERROR: process (pid: 2601742) is no longer running 00:05:55.900 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.900 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:55.900 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:55.900 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:55.900 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:55.900 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:55.900 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2601733 00:05:55.900 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2601733 00:05:55.900 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.466 lslocks: write error 00:05:56.466 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2601733 00:05:56.466 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2601733 ']' 00:05:56.466 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2601733 00:05:56.466 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:56.466 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.466 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2601733 00:05:56.466 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.466 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.467 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2601733' 00:05:56.467 killing process with pid 2601733 00:05:56.467 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2601733 00:05:56.467 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2601733 00:05:57.032 00:05:57.032 real 0m2.009s 00:05:57.032 user 0m2.226s 00:05:57.032 sys 0m0.709s 00:05:57.032 11:21:31 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.032 11:21:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.032 ************************************ 00:05:57.032 END TEST locking_app_on_locked_coremask 00:05:57.032 ************************************ 00:05:57.032 11:21:31 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:57.032 11:21:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:57.032 11:21:31 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.032 11:21:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.032 11:21:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.032 ************************************ 00:05:57.032 START TEST locking_overlapped_coremask 00:05:57.032 ************************************ 00:05:57.032 11:21:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:57.032 11:21:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2602041 00:05:57.033 11:21:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2602041 /var/tmp/spdk.sock 00:05:57.033 11:21:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:57.033 11:21:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2602041 ']' 00:05:57.033 11:21:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.033 11:21:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.033 11:21:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.033 11:21:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.033 11:21:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.033 [2024-07-15 11:21:31.329497] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:05:57.033 [2024-07-15 11:21:31.329558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2602041 ] 00:05:57.033 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.033 [2024-07-15 11:21:31.411688] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.291 [2024-07-15 11:21:31.500908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.291 [2024-07-15 11:21:31.501020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.291 [2024-07-15 11:21:31.501021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.858 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.858 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:57.858 11:21:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:57.858 11:21:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2602298 00:05:57.858 11:21:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2602298 /var/tmp/spdk2.sock 00:05:57.858 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:57.858 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2602298 /var/tmp/spdk2.sock 00:05:57.858 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:57.858 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.858 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:57.858 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.858 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2602298 /var/tmp/spdk2.sock 00:05:57.858 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2602298 ']' 00:05:57.858 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.858 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.858 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.858 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.858 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.858 [2024-07-15 11:21:32.226566] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:05:57.858 [2024-07-15 11:21:32.226613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2602298 ] 00:05:57.858 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.116 [2024-07-15 11:21:32.406514] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2602041 has claimed it. 00:05:58.116 [2024-07-15 11:21:32.406603] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:58.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2602298) - No such process 00:05:58.682 ERROR: process (pid: 2602298) is no longer running 00:05:58.682 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.682 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:58.682 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:58.682 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:58.682 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:58.682 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:58.682 11:21:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:58.682 11:21:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:58.682 11:21:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:58.682 11:21:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:58.682 11:21:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2602041 00:05:58.682 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2602041 ']' 00:05:58.682 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2602041 00:05:58.682 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:58.682 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.682 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2602041 00:05:58.682 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.682 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.682 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2602041' 00:05:58.682 killing process with pid 2602041 00:05:58.682 11:21:32 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 2602041 00:05:58.682 11:21:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2602041 00:05:58.939 00:05:58.939 real 0m2.060s 00:05:58.939 user 0m5.766s 00:05:58.939 sys 0m0.477s 00:05:58.939 11:21:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.939 11:21:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.939 ************************************ 00:05:58.939 END TEST locking_overlapped_coremask 00:05:58.939 ************************************ 00:05:58.939 11:21:33 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:58.939 11:21:33 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:58.939 11:21:33 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.939 11:21:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.939 11:21:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.939 ************************************ 00:05:58.939 START TEST locking_overlapped_coremask_via_rpc 00:05:58.939 ************************************ 00:05:58.939 11:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:58.939 11:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2602594 00:05:58.939 11:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2602594 /var/tmp/spdk.sock 00:05:58.939 11:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:58.939 11:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2602594 ']' 00:05:58.939 11:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.939 11:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.939 11:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.939 11:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.939 11:21:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.197 [2024-07-15 11:21:33.455725] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:05:59.197 [2024-07-15 11:21:33.455780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2602594 ] 00:05:59.197 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.197 [2024-07-15 11:21:33.537991] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
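Before the locking_overlapped_coremask target above was killed, the check_remaining_locks step verified that the surviving 0x7 target still owned exactly the lock files for cores 0, 1 and 2; reconstructed from the traced globs:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)                    # what actually exists
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0, 1 and 2 from mask 0x7
        [[ ${locks[*]} == "${locks_expected[*]}" ]]         # nothing missing, nothing extra
    }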
00:05:59.197 [2024-07-15 11:21:33.538026] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.197 [2024-07-15 11:21:33.630277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.197 [2024-07-15 11:21:33.630392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.197 [2024-07-15 11:21:33.630394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.135 11:21:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.135 11:21:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:00.135 11:21:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2602610 00:06:00.135 11:21:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2602610 /var/tmp/spdk2.sock 00:06:00.135 11:21:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:00.135 11:21:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2602610 ']' 00:06:00.135 11:21:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.135 11:21:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.135 11:21:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.135 11:21:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.135 11:21:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.135 [2024-07-15 11:21:34.375830] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:06:00.135 [2024-07-15 11:21:34.375875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2602610 ] 00:06:00.135 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.135 [2024-07-15 11:21:34.556011] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
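At this point both targets are coming up with overlapping masks, the first on 0x7 (cores 0-2) and the second on 0x1c (cores 2-4), and neither holds core locks because of --disable-cpumask-locks. The exchange that follows enables locking over RPC on the first target and then expects the same RPC to fail on the second, since core 2 is already claimed. Assuming the rpc_cmd helper seen in the trace resolves to SPDK's scripts/rpc.py (an assumption, not something this log shows), the sequence is roughly:

    # Paths, masks and sockets are taken from the trace; rpc.py usage is an assumption.
    ./build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &                         # cores 0-2
    ./build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # cores 2-4
    ./scripts/rpc.py framework_enable_cpumask_locks                        # first target locks 0-2
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks # must fail on core 2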
00:06:00.135 [2024-07-15 11:21:34.556070] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.394 [2024-07-15 11:21:34.852214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.394 [2024-07-15 11:21:34.855309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:00.394 [2024-07-15 11:21:34.855314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.961 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.961 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:00.961 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:00.961 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.961 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.961 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.961 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.961 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:00.961 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.961 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:00.961 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.961 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:00.961 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.961 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.961 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.961 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.961 [2024-07-15 11:21:35.387452] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2602594 has claimed it. 
00:06:00.961 request: 00:06:00.961 { 00:06:00.961 "method": "framework_enable_cpumask_locks", 00:06:00.961 "req_id": 1 00:06:00.961 } 00:06:00.961 Got JSON-RPC error response 00:06:00.961 response: 00:06:00.961 { 00:06:00.961 "code": -32603, 00:06:00.961 "message": "Failed to claim CPU core: 2" 00:06:00.961 } 00:06:00.961 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:00.961 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:00.961 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:00.961 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:00.961 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:00.961 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2602594 /var/tmp/spdk.sock 00:06:00.961 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2602594 ']' 00:06:00.962 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.962 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.962 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.962 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.962 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.220 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.220 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:01.220 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2602610 /var/tmp/spdk2.sock 00:06:01.220 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2602610 ']' 00:06:01.220 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.220 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.220 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:01.220 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.220 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.478 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.478 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:01.478 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:01.478 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:01.478 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:01.478 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:01.478 00:06:01.478 real 0m2.338s 00:06:01.478 user 0m0.879s 00:06:01.478 sys 0m0.178s 00:06:01.478 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.478 11:21:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.478 ************************************ 00:06:01.478 END TEST locking_overlapped_coremask_via_rpc 00:06:01.478 ************************************ 00:06:01.478 11:21:35 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:01.478 11:21:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:01.478 11:21:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2602594 ]] 00:06:01.478 11:21:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2602594 00:06:01.478 11:21:35 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2602594 ']' 00:06:01.478 11:21:35 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2602594 00:06:01.478 11:21:35 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:01.478 11:21:35 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.478 11:21:35 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2602594 00:06:01.478 11:21:35 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.478 11:21:35 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.478 11:21:35 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2602594' 00:06:01.478 killing process with pid 2602594 00:06:01.478 11:21:35 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2602594 00:06:01.478 11:21:35 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2602594 00:06:01.737 11:21:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2602610 ]] 00:06:01.737 11:21:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2602610 00:06:01.737 11:21:36 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2602610 ']' 00:06:01.737 11:21:36 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2602610 00:06:01.737 11:21:36 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:01.737 11:21:36 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.737 11:21:36 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2602610 00:06:01.997 11:21:36 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:01.997 11:21:36 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:01.997 11:21:36 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2602610' 00:06:01.997 killing process with pid 2602610 00:06:01.997 11:21:36 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2602610 00:06:01.997 11:21:36 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2602610 00:06:02.566 11:21:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:02.566 11:21:36 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:02.566 11:21:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2602594 ]] 00:06:02.566 11:21:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2602594 00:06:02.566 11:21:36 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2602594 ']' 00:06:02.566 11:21:36 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2602594 00:06:02.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2602594) - No such process 00:06:02.566 11:21:36 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2602594 is not found' 00:06:02.566 Process with pid 2602594 is not found 00:06:02.566 11:21:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2602610 ]] 00:06:02.566 11:21:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2602610 00:06:02.566 11:21:36 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2602610 ']' 00:06:02.567 11:21:36 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2602610 00:06:02.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2602610) - No such process 00:06:02.567 11:21:36 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2602610 is not found' 00:06:02.567 Process with pid 2602610 is not found 00:06:02.567 11:21:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:02.567 00:06:02.567 real 0m19.036s 00:06:02.567 user 0m33.046s 00:06:02.567 sys 0m5.746s 00:06:02.567 11:21:36 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.567 11:21:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.567 ************************************ 00:06:02.567 END TEST cpu_locks 00:06:02.567 ************************************ 00:06:02.567 11:21:36 event -- common/autotest_common.sh@1142 -- # return 0 00:06:02.567 00:06:02.567 real 0m45.966s 00:06:02.567 user 1m28.008s 00:06:02.567 sys 0m9.736s 00:06:02.567 11:21:36 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.567 11:21:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.567 ************************************ 00:06:02.567 END TEST event 00:06:02.567 ************************************ 00:06:02.567 11:21:36 -- common/autotest_common.sh@1142 -- # return 0 00:06:02.567 11:21:36 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:02.567 11:21:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.567 11:21:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.567 
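Every block in this log is wrapped by the run_test helper from autotest_common.sh: it checks its argument count, prints the START/END banners, and times the body, which is where the real/user/sys summaries above come from. A much-simplified sketch of that wrapper, inferred from the banners and timing output in this log rather than taken from the script itself:

    # Simplified; the real helper also manages xtrace state and propagates return codes.
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                     # produces the real/user/sys lines seen above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }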
11:21:36 -- common/autotest_common.sh@10 -- # set +x 00:06:02.567 ************************************ 00:06:02.567 START TEST thread 00:06:02.567 ************************************ 00:06:02.567 11:21:36 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:02.567 * Looking for test storage... 00:06:02.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:02.567 11:21:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:02.567 11:21:36 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:02.567 11:21:36 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.567 11:21:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.826 ************************************ 00:06:02.826 START TEST thread_poller_perf 00:06:02.826 ************************************ 00:06:02.826 11:21:37 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:02.826 [2024-07-15 11:21:37.057839] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:06:02.826 [2024-07-15 11:21:37.057907] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2603226 ] 00:06:02.826 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.826 [2024-07-15 11:21:37.139005] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.826 [2024-07-15 11:21:37.227409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.826 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:04.204 ====================================== 00:06:04.204 busy:2212012362 (cyc) 00:06:04.204 total_run_count: 255000 00:06:04.204 tsc_hz: 2200000000 (cyc) 00:06:04.204 ====================================== 00:06:04.204 poller_cost: 8674 (cyc), 3942 (nsec) 00:06:04.204 00:06:04.204 real 0m1.282s 00:06:04.204 user 0m1.186s 00:06:04.204 sys 0m0.091s 00:06:04.204 11:21:38 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.205 11:21:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.205 ************************************ 00:06:04.205 END TEST thread_poller_perf 00:06:04.205 ************************************ 00:06:04.205 11:21:38 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:04.205 11:21:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:04.205 11:21:38 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:04.205 11:21:38 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.205 11:21:38 thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.205 ************************************ 00:06:04.205 START TEST thread_poller_perf 00:06:04.205 ************************************ 00:06:04.205 11:21:38 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:04.205 [2024-07-15 11:21:38.413780] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:06:04.205 [2024-07-15 11:21:38.413887] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2603510 ] 00:06:04.205 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.205 [2024-07-15 11:21:38.529433] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.205 [2024-07-15 11:21:38.625413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.205 Running 1000 pollers for 1 seconds with 0 microseconds period. 
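The summary block above is a straightforward rate calculation: poller_cost is the busy TSC cycle count divided by the number of poller invocations, and the nanosecond figure converts that through the reported TSC frequency. Using the numbers from the 1-microsecond-period run above (the 0-microsecond run that follows is read the same way):

    # Arithmetic only; the values are copied from the summary above.
    busy_cyc=2212012362; run_count=255000; tsc_hz=2200000000
    echo $(( busy_cyc / run_count ))                        # 8674 cycles per poll
    echo $(( busy_cyc / run_count * 1000000000 / tsc_hz ))  # ~3942 ns per poll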
00:06:05.584 ====================================== 00:06:05.584 busy:2202066682 (cyc) 00:06:05.584 total_run_count: 3375000 00:06:05.584 tsc_hz: 2200000000 (cyc) 00:06:05.584 ====================================== 00:06:05.584 poller_cost: 652 (cyc), 296 (nsec) 00:06:05.584 00:06:05.584 real 0m1.316s 00:06:05.584 user 0m1.188s 00:06:05.584 sys 0m0.121s 00:06:05.584 11:21:39 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.584 11:21:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:05.584 ************************************ 00:06:05.584 END TEST thread_poller_perf 00:06:05.584 ************************************ 00:06:05.584 11:21:39 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:05.584 11:21:39 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:05.584 00:06:05.584 real 0m2.832s 00:06:05.584 user 0m2.464s 00:06:05.584 sys 0m0.374s 00:06:05.584 11:21:39 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.584 11:21:39 thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.584 ************************************ 00:06:05.584 END TEST thread 00:06:05.584 ************************************ 00:06:05.584 11:21:39 -- common/autotest_common.sh@1142 -- # return 0 00:06:05.584 11:21:39 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:05.584 11:21:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.584 11:21:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.584 11:21:39 -- common/autotest_common.sh@10 -- # set +x 00:06:05.584 ************************************ 00:06:05.584 START TEST accel 00:06:05.584 ************************************ 00:06:05.584 11:21:39 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:05.584 * Looking for test storage... 00:06:05.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:05.584 11:21:39 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:05.584 11:21:39 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:05.584 11:21:39 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:05.584 11:21:39 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2603833 00:06:05.584 11:21:39 accel -- accel/accel.sh@63 -- # waitforlisten 2603833 00:06:05.584 11:21:39 accel -- common/autotest_common.sh@829 -- # '[' -z 2603833 ']' 00:06:05.584 11:21:39 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.584 11:21:39 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.584 11:21:39 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
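The accel suite starting here first launches spdk_tgt with a generated accel JSON config and asks it which module handles each opcode; that is the accel_get_opc_assignments query traced a few entries below, whose output seeds the expected_opcs table. Assuming the rpc_cmd wrapper in the trace resolves to SPDK's scripts/rpc.py, the query amounts to:

    # jq filter copied from the trace; invoking rpc.py directly is an assumption.
    ./scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # on this host every opcode comes back as "<opcode>=software"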
00:06:05.584 11:21:39 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.584 11:21:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.584 11:21:39 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:05.584 11:21:39 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:05.584 11:21:39 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.584 11:21:39 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.584 11:21:39 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.584 11:21:39 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.584 11:21:39 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.584 11:21:39 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:05.584 11:21:39 accel -- accel/accel.sh@41 -- # jq -r . 00:06:05.584 [2024-07-15 11:21:39.958288] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:06:05.584 [2024-07-15 11:21:39.958347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2603833 ] 00:06:05.584 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.584 [2024-07-15 11:21:40.039544] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.843 [2024-07-15 11:21:40.137991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.780 11:21:41 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.780 11:21:41 accel -- common/autotest_common.sh@862 -- # return 0 00:06:06.781 11:21:41 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:06.781 11:21:41 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:06.781 11:21:41 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:06.781 11:21:41 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:06.781 11:21:41 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:06.781 11:21:41 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:06.781 11:21:41 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.781 11:21:41 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:06.781 11:21:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.781 11:21:41 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.781 11:21:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.781 11:21:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.781 11:21:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.781 11:21:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.781 11:21:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.781 11:21:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.781 11:21:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.781 11:21:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.781 11:21:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.781 11:21:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.781 11:21:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.781 11:21:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.781 11:21:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.781 11:21:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.781 11:21:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.781 11:21:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.781 11:21:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.781 11:21:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.781 11:21:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.781 11:21:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.781 11:21:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.781 11:21:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.781 
11:21:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.781 11:21:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.781 11:21:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.781 11:21:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.781 11:21:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.781 11:21:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.781 11:21:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.781 11:21:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.781 11:21:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.781 11:21:41 accel -- accel/accel.sh@75 -- # killprocess 2603833 00:06:06.781 11:21:41 accel -- common/autotest_common.sh@948 -- # '[' -z 2603833 ']' 00:06:06.781 11:21:41 accel -- common/autotest_common.sh@952 -- # kill -0 2603833 00:06:06.781 11:21:41 accel -- common/autotest_common.sh@953 -- # uname 00:06:06.781 11:21:41 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.781 11:21:41 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2603833 00:06:07.039 11:21:41 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:07.039 11:21:41 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:07.039 11:21:41 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2603833' 00:06:07.039 killing process with pid 2603833 00:06:07.039 11:21:41 accel -- common/autotest_common.sh@967 -- # kill 2603833 00:06:07.039 11:21:41 accel -- common/autotest_common.sh@972 -- # wait 2603833 00:06:07.298 11:21:41 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:07.298 11:21:41 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:07.298 11:21:41 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:07.298 11:21:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.298 11:21:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.298 11:21:41 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:07.298 11:21:41 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:07.298 11:21:41 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:07.298 11:21:41 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.298 11:21:41 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.298 11:21:41 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.298 11:21:41 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.298 11:21:41 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.298 11:21:41 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:07.298 11:21:41 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:07.298 11:21:41 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.298 11:21:41 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:07.298 11:21:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.298 11:21:41 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:07.298 11:21:41 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:07.298 11:21:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.298 11:21:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.298 ************************************ 00:06:07.298 START TEST accel_missing_filename 00:06:07.298 ************************************ 00:06:07.298 11:21:41 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:07.298 11:21:41 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:07.298 11:21:41 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:07.298 11:21:41 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:07.298 11:21:41 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.298 11:21:41 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:07.298 11:21:41 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.298 11:21:41 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:07.298 11:21:41 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:07.298 11:21:41 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:07.298 11:21:41 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.298 11:21:41 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.298 11:21:41 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.298 11:21:41 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.298 11:21:41 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.298 11:21:41 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:07.298 11:21:41 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:07.298 [2024-07-15 11:21:41.747806] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:06:07.298 [2024-07-15 11:21:41.747865] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604189 ] 00:06:07.558 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.558 [2024-07-15 11:21:41.829583] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.558 [2024-07-15 11:21:41.920575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.558 [2024-07-15 11:21:41.965270] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:07.817 [2024-07-15 11:21:42.028301] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:07.817 A filename is required. 
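That failure is the point of the test: a compress workload has nothing to operate on unless an input file is supplied with -l, so accel_perf refuses to start and the NOT wrapper records the non-zero exit as a pass. The next test exercises the other half of the contract; a working compress run would simply add the input file, something like:

    # Input path taken from the compress_verify invocation below (relative to the spdk tree);
    # -y is omitted on purpose, since compress does not support the verify option.
    ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib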
00:06:07.817 11:21:42 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:07.817 11:21:42 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:07.817 11:21:42 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:07.817 11:21:42 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:07.817 11:21:42 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:07.817 11:21:42 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:07.817 00:06:07.817 real 0m0.389s 00:06:07.817 user 0m0.290s 00:06:07.817 sys 0m0.140s 00:06:07.817 11:21:42 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.817 11:21:42 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:07.817 ************************************ 00:06:07.817 END TEST accel_missing_filename 00:06:07.817 ************************************ 00:06:07.817 11:21:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.817 11:21:42 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:07.817 11:21:42 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:07.817 11:21:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.817 11:21:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.817 ************************************ 00:06:07.817 START TEST accel_compress_verify 00:06:07.817 ************************************ 00:06:07.817 11:21:42 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:07.817 11:21:42 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:07.817 11:21:42 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:07.817 11:21:42 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:07.817 11:21:42 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.817 11:21:42 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:07.817 11:21:42 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.817 11:21:42 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:07.817 11:21:42 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:07.817 11:21:42 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:07.817 11:21:42 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.817 11:21:42 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.817 11:21:42 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.817 11:21:42 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.817 11:21:42 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.817 11:21:42 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:07.817 11:21:42 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:07.817 [2024-07-15 11:21:42.210168] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:06:07.817 [2024-07-15 11:21:42.210237] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604406 ] 00:06:07.817 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.076 [2024-07-15 11:21:42.294734] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.076 [2024-07-15 11:21:42.382768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.076 [2024-07-15 11:21:42.427805] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:08.076 [2024-07-15 11:21:42.491068] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:08.335 00:06:08.335 Compression does not support the verify option, aborting. 00:06:08.335 11:21:42 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:08.335 11:21:42 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:08.335 11:21:42 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:08.335 11:21:42 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:08.335 11:21:42 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:08.335 11:21:42 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:08.335 00:06:08.335 real 0m0.393s 00:06:08.335 user 0m0.299s 00:06:08.335 sys 0m0.139s 00:06:08.335 11:21:42 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.335 11:21:42 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:08.335 ************************************ 00:06:08.335 END TEST accel_compress_verify 00:06:08.335 ************************************ 00:06:08.335 11:21:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.335 11:21:42 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:08.335 11:21:42 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:08.335 11:21:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.335 11:21:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.335 ************************************ 00:06:08.335 START TEST accel_wrong_workload 00:06:08.335 ************************************ 00:06:08.335 11:21:42 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:08.335 11:21:42 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:08.335 11:21:42 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:08.335 11:21:42 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:08.335 11:21:42 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.335 11:21:42 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:08.335 11:21:42 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.335 11:21:42 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:08.335 11:21:42 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:08.335 11:21:42 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:08.335 11:21:42 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.335 11:21:42 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.335 11:21:42 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.335 11:21:42 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.335 11:21:42 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.335 11:21:42 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:08.335 11:21:42 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:08.335 Unsupported workload type: foobar 00:06:08.335 [2024-07-15 11:21:42.668478] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:08.335 accel_perf options: 00:06:08.335 [-h help message] 00:06:08.335 [-q queue depth per core] 00:06:08.335 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:08.335 [-T number of threads per core 00:06:08.335 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:08.335 [-t time in seconds] 00:06:08.335 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:08.335 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:08.335 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:08.335 [-l for compress/decompress workloads, name of uncompressed input file 00:06:08.335 [-S for crc32c workload, use this seed value (default 0) 00:06:08.335 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:08.335 [-f for fill workload, use this BYTE value (default 255) 00:06:08.335 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:08.335 [-y verify result if this switch is on] 00:06:08.335 [-a tasks to allocate per core (default: same value as -q)] 00:06:08.335 Can be used to spread operations across a wider range of memory. 
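All of these negative accel cases lean on the same NOT helper whose xtrace keeps reappearing (es=234, the es > 128 check, es=106, and so on): it runs the command, captures the exit status, folds signal-style codes above 128 back down, and only succeeds if the wrapped command failed. A simplified sketch of that behavior, reconstructed from the trace rather than copied from autotest_common.sh:

    # Simplified; the real helper also validates the argument and maps codes through a case block.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$(( es - 128 ))  # e.g. 234 -> 106 and 161 -> 33 in the traces above
        (( !es == 0 ))                        # exit 0 only when the wrapped command failed
    }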
00:06:08.335 11:21:42 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:08.335 11:21:42 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:08.335 11:21:42 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:08.335 11:21:42 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:08.335 00:06:08.335 real 0m0.033s 00:06:08.335 user 0m0.022s 00:06:08.335 sys 0m0.010s 00:06:08.335 11:21:42 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.335 11:21:42 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:08.335 ************************************ 00:06:08.335 END TEST accel_wrong_workload 00:06:08.335 ************************************ 00:06:08.335 Error: writing output failed: Broken pipe 00:06:08.335 11:21:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.335 11:21:42 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:08.335 11:21:42 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:08.335 11:21:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.335 11:21:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.335 ************************************ 00:06:08.335 START TEST accel_negative_buffers 00:06:08.335 ************************************ 00:06:08.335 11:21:42 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:08.335 11:21:42 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:08.335 11:21:42 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:08.335 11:21:42 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:08.335 11:21:42 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.335 11:21:42 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:08.335 11:21:42 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.335 11:21:42 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:08.335 11:21:42 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:08.335 11:21:42 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:08.335 11:21:42 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.335 11:21:42 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.335 11:21:42 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.335 11:21:42 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.335 11:21:42 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.335 11:21:42 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:08.335 11:21:42 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:08.335 -x option must be non-negative. 
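Here the invalid value is -x -1: xor needs at least two source buffers, so accel_perf rejects a negative count during option parsing before any work is queued. The usage text (printed for the foobar case above and repeated just below) documents -x as "default, minimum: 2", so a valid xor run would pass a count of two or more, for example:

    # -x 2 matches the documented default/minimum for xor source buffers.
    ./build/examples/accel_perf -t 1 -w xor -y -x 2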
00:06:08.335 [2024-07-15 11:21:42.770522] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:08.335 accel_perf options: 00:06:08.335 [-h help message] 00:06:08.335 [-q queue depth per core] 00:06:08.335 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:08.335 [-T number of threads per core 00:06:08.335 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:08.335 [-t time in seconds] 00:06:08.335 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:08.335 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:08.335 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:08.335 [-l for compress/decompress workloads, name of uncompressed input file 00:06:08.335 [-S for crc32c workload, use this seed value (default 0) 00:06:08.335 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:08.335 [-f for fill workload, use this BYTE value (default 255) 00:06:08.335 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:08.335 [-y verify result if this switch is on] 00:06:08.335 [-a tasks to allocate per core (default: same value as -q)] 00:06:08.335 Can be used to spread operations across a wider range of memory. 00:06:08.335 11:21:42 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:08.335 11:21:42 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:08.335 11:21:42 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:08.335 11:21:42 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:08.335 00:06:08.335 real 0m0.031s 00:06:08.335 user 0m0.019s 00:06:08.335 sys 0m0.012s 00:06:08.336 11:21:42 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.336 11:21:42 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:08.336 ************************************ 00:06:08.336 END TEST accel_negative_buffers 00:06:08.336 ************************************ 00:06:08.336 Error: writing output failed: Broken pipe 00:06:08.594 11:21:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.594 11:21:42 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:08.594 11:21:42 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:08.594 11:21:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.594 11:21:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.594 ************************************ 00:06:08.594 START TEST accel_crc32c 00:06:08.594 ************************************ 00:06:08.594 11:21:42 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:08.594 11:21:42 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:08.594 11:21:42 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:08.594 11:21:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.594 11:21:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.594 11:21:42 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:08.594 11:21:42 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:08.594 11:21:42 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:08.594 11:21:42 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.594 11:21:42 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.594 11:21:42 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.594 11:21:42 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.594 11:21:42 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.594 11:21:42 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:08.594 11:21:42 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:08.594 [2024-07-15 11:21:42.876111] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:06:08.594 [2024-07-15 11:21:42.876217] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604474 ] 00:06:08.594 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.594 [2024-07-15 11:21:42.994474] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.853 [2024-07-15 11:21:43.092400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:08.853 11:21:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.854 11:21:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:10.229 11:21:44 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.229 00:06:10.229 real 0m1.443s 00:06:10.229 user 0m1.289s 00:06:10.229 sys 0m0.159s 00:06:10.229 11:21:44 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.229 11:21:44 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:10.229 ************************************ 00:06:10.229 END TEST accel_crc32c 00:06:10.229 ************************************ 00:06:10.229 11:21:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.229 11:21:44 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:10.229 11:21:44 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:10.229 11:21:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.229 11:21:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.229 ************************************ 00:06:10.229 START TEST accel_crc32c_C2 00:06:10.229 ************************************ 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:10.229 11:21:44 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:10.229 [2024-07-15 11:21:44.380722] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:06:10.229 [2024-07-15 11:21:44.380774] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604754 ] 00:06:10.229 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.229 [2024-07-15 11:21:44.461853] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.229 [2024-07-15 11:21:44.550508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.229 11:21:44 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.229 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:10.230 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.230 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.230 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.230 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.230 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.230 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.230 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.230 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:10.230 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.230 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.230 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.230 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.230 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.230 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.230 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.230 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.230 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.230 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:10.230 11:21:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.606 00:06:11.606 real 0m1.391s 00:06:11.606 user 0m1.255s 00:06:11.606 sys 0m0.141s 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.606 11:21:45 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:11.606 ************************************ 00:06:11.606 END TEST accel_crc32c_C2 00:06:11.606 ************************************ 00:06:11.606 11:21:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:11.606 11:21:45 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:11.606 11:21:45 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:11.606 11:21:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.606 11:21:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.606 ************************************ 00:06:11.606 START TEST accel_copy 00:06:11.606 ************************************ 00:06:11.606 11:21:45 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:11.606 11:21:45 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:11.606 11:21:45 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
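Each run_test call in this stretch of the log drives the same accel_perf binary; only the -w workload and its sizing flags change between cases. A minimal sketch of the copy case that starts here, reusing the same assumed config stand-in as above:

# Hedged sketch: the plain copy workload, mirroring 'accel_test -t 1 -w copy -y'.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK"/build/examples/accel_perf \
    -c <(echo '{"subsystems":[{"subsystem":"accel","config":[]}]}') \
    -t 1 -w copy -y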
00:06:11.606 11:21:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.606 11:21:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.606 11:21:45 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:11.606 11:21:45 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:11.606 11:21:45 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:11.606 11:21:45 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.606 11:21:45 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.606 11:21:45 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.606 11:21:45 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.606 11:21:45 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.606 11:21:45 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:11.606 11:21:45 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:11.606 [2024-07-15 11:21:45.842240] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:06:11.606 [2024-07-15 11:21:45.842314] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2605039 ] 00:06:11.607 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.607 [2024-07-15 11:21:45.923566] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.607 [2024-07-15 11:21:46.009088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.607 11:21:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.984 11:21:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.984 11:21:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.984 11:21:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.984 11:21:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.984 
11:21:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.984 11:21:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.984 11:21:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.984 11:21:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.984 11:21:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.984 11:21:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.984 11:21:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.984 11:21:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.984 11:21:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.984 11:21:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.984 11:21:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.984 11:21:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.984 11:21:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.984 11:21:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.984 11:21:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.984 11:21:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.984 11:21:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.984 11:21:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.984 11:21:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.984 11:21:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.984 11:21:47 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.984 11:21:47 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:12.984 11:21:47 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.984 00:06:12.984 real 0m1.388s 00:06:12.984 user 0m1.261s 00:06:12.984 sys 0m0.132s 00:06:12.984 11:21:47 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.984 11:21:47 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:12.984 ************************************ 00:06:12.984 END TEST accel_copy 00:06:12.984 ************************************ 00:06:12.984 11:21:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:12.984 11:21:47 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:12.984 11:21:47 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:12.984 11:21:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.984 11:21:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.984 ************************************ 00:06:12.984 START TEST accel_fill 00:06:12.984 ************************************ 00:06:12.984 11:21:47 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:12.984 11:21:47 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:12.984 11:21:47 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:12.984 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.984 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.984 11:21:47 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:12.984 11:21:47 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:12.984 11:21:47 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:06:12.984 11:21:47 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.984 11:21:47 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.984 11:21:47 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.984 11:21:47 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.984 11:21:47 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.984 11:21:47 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:12.984 11:21:47 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:12.984 [2024-07-15 11:21:47.299993] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:06:12.984 [2024-07-15 11:21:47.300048] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2605316 ] 00:06:12.984 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.984 [2024-07-15 11:21:47.381456] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.244 [2024-07-15 11:21:47.469936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
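The fill case adds sizing flags on top of the common ones: the trace above records fill value 0x80 (matching -f 128) and the two 64s passed as -q 64 -a 64, alongside the usual 4096-byte buffer and 1-second software run. A hedged sketch of the equivalent direct invocation, with the same assumed config stand-in:

# Hedged sketch: the fill workload as driven by 'accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y'.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK"/build/examples/accel_perf \
    -c <(echo '{"subsystems":[{"subsystem":"accel","config":[]}]}') \
    -t 1 -w fill -f 128 -q 64 -a 64 -y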
00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.244 11:21:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.623 11:21:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:14.623 11:21:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.623 11:21:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.623 11:21:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.623 11:21:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:14.623 11:21:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.623 11:21:48 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:06:14.623 11:21:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.623 11:21:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:14.623 11:21:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.623 11:21:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.623 11:21:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.623 11:21:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:14.623 11:21:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.623 11:21:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.623 11:21:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.623 11:21:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:14.623 11:21:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.623 11:21:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.623 11:21:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.623 11:21:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:14.623 11:21:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.623 11:21:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.623 11:21:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.623 11:21:48 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.623 11:21:48 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:14.623 11:21:48 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.623 00:06:14.623 real 0m1.393s 00:06:14.623 user 0m1.268s 00:06:14.623 sys 0m0.130s 00:06:14.623 11:21:48 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.623 11:21:48 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:14.623 ************************************ 00:06:14.623 END TEST accel_fill 00:06:14.623 ************************************ 00:06:14.623 11:21:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.623 11:21:48 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:14.623 11:21:48 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:14.623 11:21:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.623 11:21:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.623 ************************************ 00:06:14.623 START TEST accel_copy_crc32c 00:06:14.623 ************************************ 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:14.623 [2024-07-15 11:21:48.759500] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:06:14.623 [2024-07-15 11:21:48.759565] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2605604 ] 00:06:14.623 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.623 [2024-07-15 11:21:48.831185] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.623 [2024-07-15 11:21:48.917586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:14.623 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.624 
11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.624 11:21:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.052 00:06:16.052 real 0m1.381s 00:06:16.052 user 0m1.254s 00:06:16.052 sys 0m0.134s 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.052 11:21:50 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:16.052 ************************************ 00:06:16.052 END TEST accel_copy_crc32c 00:06:16.052 ************************************ 00:06:16.052 11:21:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:16.052 11:21:50 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:16.052 11:21:50 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:16.052 11:21:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.052 11:21:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.052 ************************************ 00:06:16.052 START TEST accel_copy_crc32c_C2 00:06:16.052 ************************************ 00:06:16.052 11:21:50 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:16.052 [2024-07-15 11:21:50.206345] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:06:16.052 [2024-07-15 11:21:50.206398] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2605885 ] 00:06:16.052 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.052 [2024-07-15 11:21:50.283149] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.052 [2024-07-15 11:21:50.373631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.052 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.053 11:21:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
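This copy_crc32c_C2 case is the same copy_crc32c workload with -C 2 appended; judging from the trace, that is what introduces the 8192-byte buffer alongside the usual 4096-byte one (the log does not spell out the semantics of -C, so treat that reading as an inference). A hedged sketch of the direct invocation, again with the assumed config stand-in:

# Hedged sketch: 'accel_test -t 1 -w copy_crc32c -y -C 2' reduced to its accel_perf call.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK"/build/examples/accel_perf \
    -c <(echo '{"subsystems":[{"subsystem":"accel","config":[]}]}') \
    -t 1 -w copy_crc32c -y -C 2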
00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.431 00:06:17.431 real 0m1.387s 00:06:17.431 user 0m1.263s 00:06:17.431 sys 0m0.131s 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.431 11:21:51 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:17.431 ************************************ 00:06:17.431 END TEST accel_copy_crc32c_C2 00:06:17.431 ************************************ 00:06:17.431 11:21:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.431 11:21:51 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:17.431 11:21:51 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:17.431 11:21:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.431 11:21:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.431 ************************************ 00:06:17.431 START TEST accel_dualcast 00:06:17.431 ************************************ 00:06:17.431 11:21:51 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:17.431 [2024-07-15 11:21:51.660581] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:06:17.431 [2024-07-15 11:21:51.660649] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2606170 ] 00:06:17.431 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.431 [2024-07-15 11:21:51.743061] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.431 [2024-07-15 11:21:51.831908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.431 11:21:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.432 11:21:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.432 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.432 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.432 11:21:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:17.432 11:21:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.432 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.432 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.432 11:21:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:17.432 11:21:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.432 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.432 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.432 11:21:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:17.432 11:21:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.432 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.432 11:21:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.809 11:21:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.809 11:21:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.809 11:21:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.809 11:21:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.809 11:21:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.809 11:21:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.809 11:21:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.809 11:21:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.809 11:21:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.809 11:21:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.809 11:21:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.809 11:21:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.809 11:21:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.809 11:21:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.809 11:21:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.809 11:21:53 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.810 11:21:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.810 11:21:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.810 11:21:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.810 11:21:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.810 11:21:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.810 11:21:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.810 11:21:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.810 11:21:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.810 11:21:53 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.810 11:21:53 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:18.810 11:21:53 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.810 00:06:18.810 real 0m1.394s 00:06:18.810 user 0m1.259s 00:06:18.810 sys 0m0.140s 00:06:18.810 11:21:53 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.810 11:21:53 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:18.810 ************************************ 00:06:18.810 END TEST accel_dualcast 00:06:18.810 ************************************ 00:06:18.810 11:21:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.810 11:21:53 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:18.810 11:21:53 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:18.810 11:21:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.810 11:21:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.810 ************************************ 00:06:18.810 START TEST accel_compare 00:06:18.810 ************************************ 00:06:18.810 11:21:53 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:18.810 11:21:53 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:18.810 11:21:53 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:18.810 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.810 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.810 11:21:53 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:18.810 11:21:53 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:18.810 11:21:53 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:18.810 11:21:53 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.810 11:21:53 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.810 11:21:53 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.810 11:21:53 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.810 11:21:53 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.810 11:21:53 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:18.810 11:21:53 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:18.810 [2024-07-15 11:21:53.120899] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
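Every case ends with the same post-run assertions seen here: the module must be software, the opcode must match (dualcast above, compare next), and the real/user/sys timing of the run is printed before run_test continues. The compare workload itself simply checks that two identically filled buffers match. Reproducing it standalone follows the same pattern; the path below assumes a local build rather than the Jenkins workspace.

    # Sketch: one-second buffer-compare workload with verification (-y),
    # mirroring the accel_perf invocation traced above.
    ./spdk/build/examples/accel_perf -t 1 -w compare -y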
00:06:18.810 [2024-07-15 11:21:53.121003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2606450 ] 00:06:18.810 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.810 [2024-07-15 11:21:53.237090] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.068 [2024-07-15 11:21:53.334526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.068 11:21:53 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.068 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.069 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.069 11:21:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.069 11:21:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.069 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.069 11:21:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.445 11:21:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:20.445 11:21:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.445 11:21:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.445 11:21:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.445 11:21:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:20.445 11:21:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.445 11:21:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.445 11:21:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.445 11:21:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:20.445 11:21:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.445 11:21:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.445 11:21:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.445 11:21:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:20.445 11:21:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.445 11:21:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.445 11:21:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.445 
11:21:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:20.445 11:21:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.445 11:21:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.445 11:21:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.445 11:21:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:20.445 11:21:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.445 11:21:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.445 11:21:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.445 11:21:54 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.445 11:21:54 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:20.445 11:21:54 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.445 00:06:20.445 real 0m1.440s 00:06:20.445 user 0m1.273s 00:06:20.445 sys 0m0.172s 00:06:20.445 11:21:54 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.445 11:21:54 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:20.445 ************************************ 00:06:20.445 END TEST accel_compare 00:06:20.445 ************************************ 00:06:20.445 11:21:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:20.445 11:21:54 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:20.445 11:21:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:20.445 11:21:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.445 11:21:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.445 ************************************ 00:06:20.445 START TEST accel_xor 00:06:20.445 ************************************ 00:06:20.445 11:21:54 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:20.445 [2024-07-15 11:21:54.618611] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
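The first xor case is launched without an -x argument, and the trace below records val=2: two source buffers are XORed into the destination, which is the default here. A standalone equivalent, with the build path assumed as before:

    # Sketch: XOR two source buffers into one destination for one second and verify.
    ./spdk/build/examples/accel_perf -t 1 -w xor -y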
00:06:20.445 [2024-07-15 11:21:54.618669] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2606729 ] 00:06:20.445 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.445 [2024-07-15 11:21:54.698885] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.445 [2024-07-15 11:21:54.787236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.445 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.446 11:21:54 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.446 11:21:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:21.823 11:21:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.823 00:06:21.823 real 0m1.389s 00:06:21.823 user 0m1.263s 00:06:21.823 sys 0m0.131s 00:06:21.823 11:21:55 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.823 11:21:55 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:21.823 ************************************ 00:06:21.823 END TEST accel_xor 00:06:21.823 ************************************ 00:06:21.823 11:21:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.823 11:21:56 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:21.823 11:21:56 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:21.823 11:21:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.823 11:21:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.823 ************************************ 00:06:21.823 START TEST accel_xor 00:06:21.823 ************************************ 00:06:21.823 11:21:56 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:21.823 [2024-07-15 11:21:56.066966] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
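The second xor case adds -x 3 to the same accel_test invocation, and the trace correspondingly sets val=3: three source buffers feed the XOR instead of the default two. The standalone equivalent differs only by that flag (build path assumed):

    # Sketch: same XOR workload, but with three source buffers as requested by -x 3.
    ./spdk/build/examples/accel_perf -t 1 -w xor -y -x 3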
00:06:21.823 [2024-07-15 11:21:56.067016] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2607014 ] 00:06:21.823 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.823 [2024-07-15 11:21:56.146197] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.823 [2024-07-15 11:21:56.233896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.823 11:21:56 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.823 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.082 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.082 11:21:56 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:22.082 11:21:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.082 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.082 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.082 11:21:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.082 11:21:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.082 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.082 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.082 11:21:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.082 11:21:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.082 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.082 11:21:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:23.028 11:21:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.028 00:06:23.028 real 0m1.383s 00:06:23.028 user 0m1.261s 00:06:23.028 sys 0m0.128s 00:06:23.028 11:21:57 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.028 11:21:57 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:23.028 ************************************ 00:06:23.028 END TEST accel_xor 00:06:23.028 ************************************ 00:06:23.028 11:21:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.028 11:21:57 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:23.028 11:21:57 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:23.028 11:21:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.028 11:21:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.288 ************************************ 00:06:23.288 START TEST accel_dif_verify 00:06:23.288 ************************************ 00:06:23.288 11:21:57 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:23.288 [2024-07-15 11:21:57.519945] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
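dif_verify moves on to protection-information workloads and drops -y. The traced values (two 4096-byte buffers plus a 512-byte and an 8-byte setting) are consistent with 4 KiB transfers carrying the standard 8-byte T10 DIF field per 512-byte block, although the exact meaning of each value is internal to accel.sh and not spelled out in the log. Repeating the run only needs the workload name (local build path assumed):

    # Sketch: one-second DIF-verify workload; accel_perf checks the protection
    # information attached to each block rather than comparing payloads.
    ./spdk/build/examples/accel_perf -t 1 -w dif_verify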
00:06:23.288 [2024-07-15 11:21:57.519998] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2607294 ] 00:06:23.288 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.288 [2024-07-15 11:21:57.602106] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.288 [2024-07-15 11:21:57.689362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 11:21:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:24.665 11:21:58 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.665 00:06:24.665 real 0m1.392s 00:06:24.665 user 0m1.262s 00:06:24.665 sys 0m0.137s 00:06:24.665 11:21:58 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.665 11:21:58 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:24.665 ************************************ 00:06:24.665 END TEST accel_dif_verify 00:06:24.665 ************************************ 00:06:24.665 11:21:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.665 11:21:58 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:24.665 11:21:58 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:24.665 11:21:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.665 11:21:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.665 ************************************ 00:06:24.665 START TEST accel_dif_generate 00:06:24.665 ************************************ 00:06:24.665 11:21:58 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:24.665 11:21:58 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:24.665 11:21:58 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:24.665 11:21:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.665 
11:21:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.665 11:21:58 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:24.665 11:21:58 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:24.665 11:21:58 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:24.665 11:21:58 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.665 11:21:58 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.665 11:21:58 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.665 11:21:58 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.665 11:21:58 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.665 11:21:58 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:24.665 11:21:58 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:24.665 [2024-07-15 11:21:58.969130] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:06:24.665 [2024-07-15 11:21:58.969185] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2607577 ] 00:06:24.665 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.665 [2024-07-15 11:21:59.050688] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.932 [2024-07-15 11:21:59.138511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:24.932 11:21:59 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.932 11:21:59 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.932 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.933 11:21:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.868 11:22:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.868 11:22:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.868 11:22:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.868 11:22:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.868 11:22:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.868 11:22:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.868 11:22:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.868 11:22:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.868 11:22:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.868 11:22:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.868 11:22:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.868 11:22:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.868 11:22:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:26.127 11:22:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.127 11:22:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.127 11:22:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.127 11:22:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:26.127 11:22:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.127 11:22:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.127 11:22:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.127 11:22:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:26.127 11:22:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.127 11:22:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.127 11:22:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.127 11:22:00 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.127 11:22:00 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:26.127 11:22:00 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.127 00:06:26.127 real 0m1.386s 00:06:26.127 user 0m1.266s 00:06:26.127 sys 0m0.127s 00:06:26.127 11:22:00 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.127 11:22:00 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:26.127 ************************************ 00:06:26.127 END TEST accel_dif_generate 00:06:26.127 ************************************ 00:06:26.127 11:22:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:26.127 11:22:00 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:26.127 11:22:00 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:26.127 11:22:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.127 11:22:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.127 ************************************ 00:06:26.127 START TEST accel_dif_generate_copy 00:06:26.127 ************************************ 00:06:26.127 11:22:00 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:26.127 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:26.127 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:26.127 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.127 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.127 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:26.127 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:26.127 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:26.127 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.127 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.127 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.127 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.127 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.127 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:26.127 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:26.127 [2024-07-15 11:22:00.428523] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
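The dif_generate case above closes with its timing summary (real 0m1.386s, all on the software module) and the harness immediately launches the dif_generate_copy variant through the same accel_perf example binary. As a rough, hedged sketch of that invocation outside the harness, assuming a local SPDK checkout as the working directory and skipping the -c /dev/fd/62 JSON config that the harness normally synthesizes (this assumes accel_perf can fall back to its built-in software module without one):

  # 1-second DIF generate workload, using only flags visible in the log above
  ./build/examples/accel_perf -t 1 -w dif_generate

  # the copy variant exercised next in the log
  ./build/examples/accel_perf -t 1 -w dif_generate_copy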
00:06:26.127 [2024-07-15 11:22:00.428626] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2607858 ] 00:06:26.127 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.127 [2024-07-15 11:22:00.544670] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.386 [2024-07-15 11:22:00.641274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.386 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.387 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:26.387 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.387 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.387 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.387 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:26.387 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.387 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.387 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.387 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.387 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.387 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.387 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.387 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:26.387 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.387 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.387 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.387 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:26.387 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.387 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.387 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.387 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:26.387 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.387 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.387 11:22:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.764 00:06:27.764 real 0m1.440s 00:06:27.764 user 0m1.278s 00:06:27.764 sys 0m0.168s 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.764 11:22:01 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:27.764 ************************************ 00:06:27.764 END TEST accel_dif_generate_copy 00:06:27.764 ************************************ 00:06:27.764 11:22:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.764 11:22:01 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:27.764 11:22:01 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.764 11:22:01 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:27.764 11:22:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.764 11:22:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.764 ************************************ 00:06:27.764 START TEST accel_comp 00:06:27.764 ************************************ 00:06:27.764 11:22:01 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.764 11:22:01 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:27.764 11:22:01 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:27.764 11:22:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.764 11:22:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.764 11:22:01 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.764 11:22:01 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.764 11:22:01 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:27.764 11:22:01 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.764 11:22:01 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.764 11:22:01 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.764 11:22:01 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.764 11:22:01 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.764 11:22:01 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:27.764 11:22:01 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:27.764 [2024-07-15 11:22:01.929958] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:06:27.764 [2024-07-15 11:22:01.930029] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2608145 ] 00:06:27.764 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.764 [2024-07-15 11:22:02.011673] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.764 [2024-07-15 11:22:02.099981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.764 11:22:02 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.764 11:22:02 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:27.765 11:22:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.765 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.765 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:27.765 11:22:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.765 11:22:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.765 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.765 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.765 11:22:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.765 11:22:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.765 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.765 11:22:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:29.142 11:22:03 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.142 00:06:29.142 real 0m1.396s 00:06:29.142 user 0m1.267s 00:06:29.142 sys 0m0.136s 00:06:29.142 11:22:03 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.142 11:22:03 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:29.142 ************************************ 00:06:29.142 END TEST accel_comp 00:06:29.142 ************************************ 00:06:29.142 11:22:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.142 11:22:03 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:29.142 11:22:03 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:29.142 11:22:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.142 11:22:03 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:29.142 ************************************ 00:06:29.142 START TEST accel_decomp 00:06:29.142 ************************************ 00:06:29.142 11:22:03 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:29.142 11:22:03 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:29.142 11:22:03 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:29.142 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.142 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.142 11:22:03 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:29.142 11:22:03 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:29.142 11:22:03 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:29.142 11:22:03 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.142 11:22:03 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.142 11:22:03 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.142 11:22:03 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.142 11:22:03 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.142 11:22:03 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:29.142 11:22:03 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:29.142 [2024-07-15 11:22:03.393099] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
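The accel_comp and accel_decomp cases around this point feed test/accel/bib to accel_perf through -l, and the decompress run adds -y. A hedged sketch of those two invocations, under the same assumptions as the sketch above (local checkout, no -c config; reading -y as a result-verification switch is an inference from the test flow, not something this log states):

  # compress the bundled test file for 1 second on the software module
  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib

  # decompress the same file; -y is assumed here to request verification
  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y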
00:06:29.142 [2024-07-15 11:22:03.393215] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2608423 ] 00:06:29.142 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.142 [2024-07-15 11:22:03.510156] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.142 [2024-07-15 11:22:03.605327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.400 11:22:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:29.400 11:22:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.400 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.400 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.400 11:22:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:29.400 11:22:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.400 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.400 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.400 11:22:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:29.400 11:22:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.400 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.401 11:22:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.336 11:22:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.336 11:22:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.336 11:22:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.336 11:22:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.336 11:22:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.336 11:22:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.336 11:22:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.595 11:22:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.595 11:22:04 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.595 11:22:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.595 11:22:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.595 11:22:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.595 11:22:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.595 11:22:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.595 11:22:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.595 11:22:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.595 11:22:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.595 11:22:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.595 11:22:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.595 11:22:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.595 11:22:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.595 11:22:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.595 11:22:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.595 11:22:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.595 11:22:04 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.595 11:22:04 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:30.595 11:22:04 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.595 00:06:30.595 real 0m1.442s 00:06:30.595 user 0m1.272s 00:06:30.595 sys 0m0.177s 00:06:30.595 11:22:04 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.595 11:22:04 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:30.595 ************************************ 00:06:30.595 END TEST accel_decomp 00:06:30.595 ************************************ 00:06:30.595 11:22:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.595 11:22:04 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:30.595 11:22:04 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:30.595 11:22:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.595 11:22:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.595 ************************************ 00:06:30.595 START TEST accel_decomp_full 00:06:30.595 ************************************ 00:06:30.595 11:22:04 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:30.595 11:22:04 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:30.595 11:22:04 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:30.595 11:22:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.595 11:22:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.595 11:22:04 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:30.595 11:22:04 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:30.595 11:22:04 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:30.595 11:22:04 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.595 11:22:04 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.595 11:22:04 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.595 11:22:04 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.595 11:22:04 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.595 11:22:04 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:30.595 11:22:04 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:30.595 [2024-07-15 11:22:04.898015] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:06:30.595 [2024-07-15 11:22:04.898076] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2608723 ] 00:06:30.595 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.595 [2024-07-15 11:22:04.980545] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.853 [2024-07-15 11:22:05.069872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.853 11:22:05 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.853 11:22:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:32.227 11:22:06 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.227 00:06:32.227 real 0m1.407s 00:06:32.227 user 0m1.276s 00:06:32.227 sys 0m0.138s 00:06:32.227 11:22:06 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.227 11:22:06 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:32.227 ************************************ 00:06:32.227 END TEST accel_decomp_full 00:06:32.227 ************************************ 00:06:32.227 11:22:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:32.227 11:22:06 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:32.227 11:22:06 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:06:32.227 11:22:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.227 11:22:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.227 ************************************ 00:06:32.227 START TEST accel_decomp_mcore 00:06:32.227 ************************************ 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:32.227 [2024-07-15 11:22:06.375399] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:06:32.227 [2024-07-15 11:22:06.375457] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2609012 ] 00:06:32.227 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.227 [2024-07-15 11:22:06.458005] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:32.227 [2024-07-15 11:22:06.548538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.227 [2024-07-15 11:22:06.548653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.227 [2024-07-15 11:22:06.548742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.227 [2024-07-15 11:22:06.548743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.227 11:22:06 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.227 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:32.228 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.228 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.228 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.228 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.228 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.228 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.228 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.228 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:32.228 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.228 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.228 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.228 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.228 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.228 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.228 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.228 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.228 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.228 11:22:06 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:32.228 11:22:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.602 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.602 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.602 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.602 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.602 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.602 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.602 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.602 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.602 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.602 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.602 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.602 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.602 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.602 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.602 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.602 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.602 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.602 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.602 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.602 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.602 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.602 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.602 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.602 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.603 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.603 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.603 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.603 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.603 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.603 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.603 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.603 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.603 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.603 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.603 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.603 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.603 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.603 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:33.603 11:22:07 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.603 00:06:33.603 real 0m1.415s 00:06:33.603 user 0m4.643s 00:06:33.603 sys 0m0.155s 00:06:33.603 11:22:07 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.603 11:22:07 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:33.603 ************************************ 00:06:33.603 END TEST accel_decomp_mcore 00:06:33.603 ************************************ 00:06:33.603 11:22:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.603 11:22:07 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:33.603 11:22:07 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:33.603 11:22:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.603 11:22:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.603 ************************************ 00:06:33.603 START TEST accel_decomp_full_mcore 00:06:33.603 ************************************ 00:06:33.603 11:22:07 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:33.603 11:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:33.603 11:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:33.603 11:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.603 11:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:33.603 11:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.603 11:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:33.603 11:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:33.603 11:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.603 11:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.603 11:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.603 11:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.603 11:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.603 11:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:33.603 11:22:07 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:33.603 [2024-07-15 11:22:07.859362] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
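The accel_perf command traced above is an ordinary binary that can also be run by hand. A minimal sketch of the same invocation, with paths copied from this workspace and flag meanings inferred from the trace rather than from the tool's help text (-t run time in seconds, -w workload type, -l compressed input file, -y verify the output, -o 0 apparently selecting full-sized chunks, -m 0xf a four-core mask):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # In the harness, -c /dev/fd/62 feeds in the JSON accel config assembled by
  # build_accel_config; for a quick manual run that option can simply be dropped.
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf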
00:06:33.603 [2024-07-15 11:22:07.859427] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2609315 ] 00:06:33.603 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.603 [2024-07-15 11:22:07.944004] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:33.603 [2024-07-15 11:22:08.039134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.603 [2024-07-15 11:22:08.039248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.603 [2024-07-15 11:22:08.039361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.603 [2024-07-15 11:22:08.039362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.860 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 11:22:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.260 00:06:35.260 real 0m1.465s 00:06:35.260 user 0m4.820s 00:06:35.260 sys 0m0.160s 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.260 11:22:09 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:35.260 ************************************ 00:06:35.260 END TEST accel_decomp_full_mcore 00:06:35.260 ************************************ 00:06:35.260 11:22:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:35.260 11:22:09 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:35.260 11:22:09 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:35.260 11:22:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.260 11:22:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.260 ************************************ 00:06:35.260 START TEST accel_decomp_mthread 00:06:35.260 ************************************ 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:35.260 [2024-07-15 11:22:09.392765] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
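The pattern in the module check above, [[ software == \s\o\f\t\w\a\r\e ]], is not log corruption: accel.sh compares the module name against a quoted string, and with xtrace enabled bash prints a quoted right-hand side of == inside [[ ]] with every character backslash-escaped, making explicit that no glob matching takes place. A one-line reproduction on any recent bash:

  bash -xc '[[ software == "software" ]]'
  # prints: + [[ software == \s\o\f\t\w\a\r\e ]]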
00:06:35.260 [2024-07-15 11:22:09.392818] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2609655 ] 00:06:35.260 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.260 [2024-07-15 11:22:09.473495] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.260 [2024-07-15 11:22:09.566751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.260 11:22:09 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.260 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:35.261 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.261 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.261 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.261 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.261 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.261 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.261 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.261 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.261 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.261 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.261 11:22:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.638 11:22:10 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.638 00:06:36.638 real 0m1.406s 00:06:36.638 user 0m1.286s 00:06:36.638 sys 0m0.135s 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.638 11:22:10 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:36.638 ************************************ 00:06:36.638 END TEST accel_decomp_mthread 00:06:36.638 ************************************ 00:06:36.638 11:22:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:36.638 11:22:10 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.638 11:22:10 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:36.638 11:22:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.638 11:22:10 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:36.638 ************************************ 00:06:36.638 START TEST accel_decomp_full_mthread 00:06:36.638 ************************************ 00:06:36.638 11:22:10 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.638 11:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:36.638 11:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:36.638 11:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.638 11:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.638 11:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.638 11:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.638 11:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:36.638 11:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.638 11:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.638 11:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.639 11:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.639 11:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.639 11:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:36.639 11:22:10 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:36.639 [2024-07-15 11:22:10.866038] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
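Each of these cases is driven through the run_test wrapper that prints the START TEST / END TEST banners and the real/user/sys timing seen throughout this log. Its observable behaviour can be reconstructed from the trace roughly as below; this is a simplified sketch, the real helper lives in test/common/autotest_common.sh and also toggles xtrace and records per-test timings for the final report.

  run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"    # the command under test, e.g. accel_test -t 1 -w decompress ...
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
  }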
00:06:36.639 [2024-07-15 11:22:10.866095] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2609946 ] 00:06:36.639 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.639 [2024-07-15 11:22:10.946117] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.639 [2024-07-15 11:22:11.035877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.639 11:22:11 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.639 11:22:11 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.639 11:22:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.016 00:06:38.016 real 0m1.425s 00:06:38.016 user 0m1.300s 00:06:38.016 sys 0m0.140s 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.016 11:22:12 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:38.016 ************************************ 00:06:38.016 END 
TEST accel_decomp_full_mthread 00:06:38.016 ************************************ 00:06:38.016 11:22:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:38.016 11:22:12 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:38.016 11:22:12 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:38.016 11:22:12 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:38.016 11:22:12 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:38.016 11:22:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.016 11:22:12 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.016 11:22:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.016 11:22:12 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.016 11:22:12 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.016 11:22:12 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.016 11:22:12 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.016 11:22:12 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:38.016 11:22:12 accel -- accel/accel.sh@41 -- # jq -r . 00:06:38.016 ************************************ 00:06:38.016 START TEST accel_dif_functional_tests 00:06:38.016 ************************************ 00:06:38.016 11:22:12 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:38.017 [2024-07-15 11:22:12.382603] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:06:38.017 [2024-07-15 11:22:12.382656] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2610256 ] 00:06:38.017 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.017 [2024-07-15 11:22:12.464040] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.275 [2024-07-15 11:22:12.555333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.275 [2024-07-15 11:22:12.555446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.275 [2024-07-15 11:22:12.555448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.275 00:06:38.275 00:06:38.275 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.275 http://cunit.sourceforge.net/ 00:06:38.275 00:06:38.275 00:06:38.275 Suite: accel_dif 00:06:38.275 Test: verify: DIF generated, GUARD check ...passed 00:06:38.275 Test: verify: DIF generated, APPTAG check ...passed 00:06:38.275 Test: verify: DIF generated, REFTAG check ...passed 00:06:38.275 Test: verify: DIF not generated, GUARD check ...[2024-07-15 11:22:12.629232] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:38.275 passed 00:06:38.275 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 11:22:12.629307] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:38.275 passed 00:06:38.275 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 11:22:12.629340] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:38.275 passed 00:06:38.275 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:38.275 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
11:22:12.629409] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:38.275 passed 00:06:38.275 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:38.275 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:38.275 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:38.275 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 11:22:12.629566] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:38.275 passed 00:06:38.275 Test: verify copy: DIF generated, GUARD check ...passed 00:06:38.275 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:38.275 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:38.275 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 11:22:12.629737] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:38.275 passed 00:06:38.275 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 11:22:12.629771] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:38.275 passed 00:06:38.275 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 11:22:12.629802] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:38.275 passed 00:06:38.275 Test: generate copy: DIF generated, GUARD check ...passed 00:06:38.275 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:38.275 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:38.275 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:38.275 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:38.275 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:38.275 Test: generate copy: iovecs-len validate ...[2024-07-15 11:22:12.630057] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:38.275 passed 00:06:38.275 Test: generate copy: buffer alignment validate ...passed 00:06:38.275 00:06:38.275 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.275 suites 1 1 n/a 0 0 00:06:38.275 tests 26 26 26 0 0 00:06:38.275 asserts 115 115 115 0 n/a 00:06:38.275 00:06:38.275 Elapsed time = 0.002 seconds 00:06:38.534 00:06:38.534 real 0m0.483s 00:06:38.534 user 0m0.691s 00:06:38.534 sys 0m0.170s 00:06:38.534 11:22:12 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.534 11:22:12 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:38.534 ************************************ 00:06:38.534 END TEST accel_dif_functional_tests 00:06:38.534 ************************************ 00:06:38.534 11:22:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:38.534 00:06:38.534 real 0m33.042s 00:06:38.534 user 0m36.628s 00:06:38.534 sys 0m4.980s 00:06:38.534 11:22:12 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.534 11:22:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.534 ************************************ 00:06:38.534 END TEST accel 00:06:38.534 ************************************ 00:06:38.534 11:22:12 -- common/autotest_common.sh@1142 -- # return 0 00:06:38.534 11:22:12 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:38.534 11:22:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.534 11:22:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.534 11:22:12 -- common/autotest_common.sh@10 -- # set +x 00:06:38.534 ************************************ 00:06:38.534 START TEST accel_rpc 00:06:38.534 ************************************ 00:06:38.534 11:22:12 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:38.792 * Looking for test storage... 00:06:38.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:38.792 11:22:13 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:38.792 11:22:13 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2610441 00:06:38.792 11:22:13 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2610441 00:06:38.792 11:22:13 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:38.792 11:22:13 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2610441 ']' 00:06:38.792 11:22:13 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.792 11:22:13 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.792 11:22:13 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.792 11:22:13 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.792 11:22:13 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.792 [2024-07-15 11:22:13.112666] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
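The *ERROR* lines printed by accel_dif_functional_tests above are expected output: each negative case deliberately corrupts one field of the 8-byte T10 protection information attached to a block (a 2-byte CRC guard, a 2-byte application tag and a 4-byte reference tag), and the case is reported as passed precisely because the mismatch was detected. To rerun just that CUnit suite outside the run_test wrapper, the binary below can be invoked directly (path taken from this workspace; the harness additionally passes -c /dev/fd/62 to feed in an accel JSON config, which a manual run can omit):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif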
00:06:38.792 [2024-07-15 11:22:13.112772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2610441 ] 00:06:38.792 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.792 [2024-07-15 11:22:13.230227] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.050 [2024-07-15 11:22:13.321208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.616 11:22:14 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.616 11:22:14 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:39.616 11:22:14 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:39.616 11:22:14 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:39.616 11:22:14 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:39.616 11:22:14 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:39.616 11:22:14 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:39.616 11:22:14 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:39.616 11:22:14 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.616 11:22:14 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.616 ************************************ 00:06:39.616 START TEST accel_assign_opcode 00:06:39.616 ************************************ 00:06:39.616 11:22:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:39.616 11:22:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:39.616 11:22:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.616 11:22:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:39.616 [2024-07-15 11:22:14.063483] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:39.616 11:22:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.616 11:22:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:39.616 11:22:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.616 11:22:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:39.616 [2024-07-15 11:22:14.071496] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:39.616 11:22:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.616 11:22:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:39.616 11:22:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.616 11:22:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:39.874 11:22:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.874 11:22:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:39.874 11:22:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.874 11:22:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 
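The accel_rpc flow above is a short RPC conversation with a target started under --wait-for-rpc, which keeps the framework paused until it is told to initialize. The same steps can be issued directly with scripts/rpc.py against the default /var/tmp/spdk.sock socket; this is a sketch of what the test's rpc_cmd wrapper does, with paths from this workspace:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
  sleep 1   # the test itself polls the socket via waitforlisten instead of sleeping
  # assign the copy opcode, first to a bogus module, then to the software module
  "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m incorrect
  "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software
  # leave the --wait-for-rpc pause and finish framework initialization
  "$SPDK/scripts/rpc.py" framework_start_init
  # confirm the assignment; prints "software"
  "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy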
00:06:39.874 11:22:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:39.874 11:22:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:39.874 11:22:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.874 software 00:06:39.874 00:06:39.874 real 0m0.253s 00:06:39.874 user 0m0.050s 00:06:39.874 sys 0m0.010s 00:06:39.874 11:22:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.874 11:22:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:39.874 ************************************ 00:06:39.874 END TEST accel_assign_opcode 00:06:39.874 ************************************ 00:06:40.132 11:22:14 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:40.132 11:22:14 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2610441 00:06:40.132 11:22:14 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2610441 ']' 00:06:40.132 11:22:14 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2610441 00:06:40.132 11:22:14 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:40.132 11:22:14 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:40.132 11:22:14 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2610441 00:06:40.132 11:22:14 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:40.132 11:22:14 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:40.132 11:22:14 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2610441' 00:06:40.132 killing process with pid 2610441 00:06:40.132 11:22:14 accel_rpc -- common/autotest_common.sh@967 -- # kill 2610441 00:06:40.132 11:22:14 accel_rpc -- common/autotest_common.sh@972 -- # wait 2610441 00:06:40.389 00:06:40.389 real 0m1.802s 00:06:40.389 user 0m1.957s 00:06:40.389 sys 0m0.506s 00:06:40.389 11:22:14 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.389 11:22:14 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.389 ************************************ 00:06:40.389 END TEST accel_rpc 00:06:40.389 ************************************ 00:06:40.389 11:22:14 -- common/autotest_common.sh@1142 -- # return 0 00:06:40.389 11:22:14 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:40.389 11:22:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:40.389 11:22:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.389 11:22:14 -- common/autotest_common.sh@10 -- # set +x 00:06:40.389 ************************************ 00:06:40.389 START TEST app_cmdline 00:06:40.389 ************************************ 00:06:40.389 11:22:14 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:40.646 * Looking for test storage... 
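The teardown above is the killprocess helper from autotest_common.sh: it checks that the PID is still alive with kill -0, looks up the process name with ps --no-headers -o comm= (reactor_0 for an SPDK app, with a special case for processes launched through sudo), logs the kill and then waits for the child to exit. Stripped of the sudo handling, the visible behaviour is roughly the sketch below, reconstructed from the trace rather than from the helper's source:

  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if it already exited
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null                  # valid here because spdk_tgt is a child of this shell
  }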
00:06:40.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:40.646 11:22:14 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:40.646 11:22:14 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2610776 00:06:40.646 11:22:14 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2610776 00:06:40.646 11:22:14 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2610776 ']' 00:06:40.646 11:22:14 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:40.646 11:22:14 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.646 11:22:14 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.646 11:22:14 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.646 11:22:14 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.646 11:22:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:40.646 [2024-07-15 11:22:14.984581] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:06:40.646 [2024-07-15 11:22:14.984686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2610776 ] 00:06:40.646 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.646 [2024-07-15 11:22:15.102086] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.904 [2024-07-15 11:22:15.193218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.161 11:22:15 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.161 11:22:15 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:41.161 11:22:15 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:41.420 { 00:06:41.420 "version": "SPDK v24.09-pre git sha1 e85883441", 00:06:41.420 "fields": { 00:06:41.420 "major": 24, 00:06:41.420 "minor": 9, 00:06:41.420 "patch": 0, 00:06:41.420 "suffix": "-pre", 00:06:41.420 "commit": "e85883441" 00:06:41.420 } 00:06:41.420 } 00:06:41.420 11:22:15 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:41.420 11:22:15 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:41.420 11:22:15 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:41.420 11:22:15 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:41.420 11:22:15 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:41.420 11:22:15 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:41.420 11:22:15 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.420 11:22:15 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:41.420 11:22:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:41.420 11:22:15 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.420 11:22:15 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:41.420 11:22:15 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:41.420 11:22:15 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:41.420 11:22:15 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:41.420 11:22:15 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:41.420 11:22:15 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:41.420 11:22:15 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:41.420 11:22:15 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:41.420 11:22:15 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:41.420 11:22:15 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:41.420 11:22:15 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:41.420 11:22:15 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:41.420 11:22:15 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:41.420 11:22:15 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:41.678 request: 00:06:41.678 { 00:06:41.678 "method": "env_dpdk_get_mem_stats", 00:06:41.678 "req_id": 1 00:06:41.678 } 00:06:41.678 Got JSON-RPC error response 00:06:41.678 response: 00:06:41.678 { 00:06:41.678 "code": -32601, 00:06:41.678 "message": "Method not found" 00:06:41.678 } 00:06:41.678 11:22:15 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:41.678 11:22:15 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:41.678 11:22:15 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:41.678 11:22:15 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:41.678 11:22:15 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2610776 00:06:41.678 11:22:15 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2610776 ']' 00:06:41.678 11:22:15 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2610776 00:06:41.678 11:22:15 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:41.678 11:22:15 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:41.678 11:22:15 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2610776 00:06:41.678 11:22:16 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:41.678 11:22:16 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:41.678 11:22:16 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2610776' 00:06:41.678 killing process with pid 2610776 00:06:41.678 11:22:16 app_cmdline -- common/autotest_common.sh@967 -- # kill 2610776 00:06:41.678 11:22:16 app_cmdline -- common/autotest_common.sh@972 -- # wait 2610776 00:06:41.936 00:06:41.936 real 0m1.560s 00:06:41.936 user 0m2.119s 00:06:41.936 sys 0m0.515s 00:06:41.936 11:22:16 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
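The "Method not found" response above is the expected effect of starting the target with --rpcs-allowed; a sketch of the same check done by hand, with paths assumed relative to an SPDK checkout:

  # Expose only two RPC methods on this target instance
  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  ./scripts/rpc.py spdk_get_version         # allowed: returns the version object shown above
  ./scripts/rpc.py rpc_get_methods          # allowed: lists exactly the two permitted methods
  ./scripts/rpc.py env_dpdk_get_mem_stats   # not on the allowlist: fails with JSON-RPC -32601 "Method not found"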
00:06:41.936 11:22:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:41.936 ************************************ 00:06:41.936 END TEST app_cmdline 00:06:41.936 ************************************ 00:06:41.936 11:22:16 -- common/autotest_common.sh@1142 -- # return 0 00:06:41.936 11:22:16 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:41.936 11:22:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:41.936 11:22:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.936 11:22:16 -- common/autotest_common.sh@10 -- # set +x 00:06:42.194 ************************************ 00:06:42.194 START TEST version 00:06:42.194 ************************************ 00:06:42.194 11:22:16 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:42.194 * Looking for test storage... 00:06:42.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:42.194 11:22:16 version -- app/version.sh@17 -- # get_header_version major 00:06:42.194 11:22:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:42.194 11:22:16 version -- app/version.sh@14 -- # cut -f2 00:06:42.194 11:22:16 version -- app/version.sh@14 -- # tr -d '"' 00:06:42.194 11:22:16 version -- app/version.sh@17 -- # major=24 00:06:42.194 11:22:16 version -- app/version.sh@18 -- # get_header_version minor 00:06:42.194 11:22:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:42.194 11:22:16 version -- app/version.sh@14 -- # cut -f2 00:06:42.194 11:22:16 version -- app/version.sh@14 -- # tr -d '"' 00:06:42.194 11:22:16 version -- app/version.sh@18 -- # minor=9 00:06:42.194 11:22:16 version -- app/version.sh@19 -- # get_header_version patch 00:06:42.194 11:22:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:42.194 11:22:16 version -- app/version.sh@14 -- # cut -f2 00:06:42.194 11:22:16 version -- app/version.sh@14 -- # tr -d '"' 00:06:42.194 11:22:16 version -- app/version.sh@19 -- # patch=0 00:06:42.194 11:22:16 version -- app/version.sh@20 -- # get_header_version suffix 00:06:42.194 11:22:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:42.194 11:22:16 version -- app/version.sh@14 -- # cut -f2 00:06:42.194 11:22:16 version -- app/version.sh@14 -- # tr -d '"' 00:06:42.194 11:22:16 version -- app/version.sh@20 -- # suffix=-pre 00:06:42.194 11:22:16 version -- app/version.sh@22 -- # version=24.9 00:06:42.194 11:22:16 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:42.194 11:22:16 version -- app/version.sh@28 -- # version=24.9rc0 00:06:42.194 11:22:16 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:42.194 11:22:16 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:06:42.194 11:22:16 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:42.194 11:22:16 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:42.194 00:06:42.194 real 0m0.171s 00:06:42.194 user 0m0.089s 00:06:42.194 sys 0m0.122s 00:06:42.194 11:22:16 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.194 11:22:16 version -- common/autotest_common.sh@10 -- # set +x 00:06:42.194 ************************************ 00:06:42.194 END TEST version 00:06:42.194 ************************************ 00:06:42.194 11:22:16 -- common/autotest_common.sh@1142 -- # return 0 00:06:42.194 11:22:16 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:42.194 11:22:16 -- spdk/autotest.sh@198 -- # uname -s 00:06:42.194 11:22:16 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:42.194 11:22:16 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:42.194 11:22:16 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:42.194 11:22:16 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:42.194 11:22:16 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:42.194 11:22:16 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:42.194 11:22:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:42.194 11:22:16 -- common/autotest_common.sh@10 -- # set +x 00:06:42.453 11:22:16 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:42.453 11:22:16 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:42.453 11:22:16 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:42.453 11:22:16 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:42.453 11:22:16 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:42.453 11:22:16 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:42.453 11:22:16 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:42.453 11:22:16 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:42.453 11:22:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.453 11:22:16 -- common/autotest_common.sh@10 -- # set +x 00:06:42.453 ************************************ 00:06:42.453 START TEST nvmf_tcp 00:06:42.454 ************************************ 00:06:42.454 11:22:16 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:42.454 * Looking for test storage... 00:06:42.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:42.454 11:22:16 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.454 11:22:16 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.454 11:22:16 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.454 11:22:16 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.454 11:22:16 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.454 11:22:16 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.454 11:22:16 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:42.454 11:22:16 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:42.454 11:22:16 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:42.454 11:22:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:42.454 11:22:16 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:42.454 11:22:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:42.454 11:22:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.454 11:22:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:42.454 ************************************ 00:06:42.454 START TEST nvmf_example 00:06:42.454 ************************************ 00:06:42.454 11:22:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:42.713 * Looking for test storage... 
00:06:42.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:42.714 11:22:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:49.284 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:49.284 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:49.284 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:49.284 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:49.284 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:49.284 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:49.284 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:49.284 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:49.284 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:49.284 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:49.285 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:49.285 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:49.285 Found net devices under 
0000:af:00.0: cvl_0_0 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:49.285 Found net devices under 0000:af:00.1: cvl_0_1 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:49.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:49.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:06:49.285 00:06:49.285 --- 10.0.0.2 ping statistics --- 00:06:49.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.285 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:49.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:49.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:06:49.285 00:06:49.285 --- 10.0.0.1 ping statistics --- 00:06:49.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.285 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2614541 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2614541 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 2614541 ']' 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
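Condensed, the nvmf_tcp_init sequence above gives the target side its own network namespace and address so the initiator in the host namespace can reach it over the e810 link (interface names cvl_0_0/cvl_0_1 as detected earlier in this run):

  ip netns add cvl_0_0_ns_spdk                                        # namespace that will host the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the first port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                                  # host -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> host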
00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.285 11:22:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:49.285 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.285 11:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.285 11:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:49.285 11:22:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:49.285 11:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:49.285 11:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:49.285 11:22:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:49.285 11:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.285 11:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:49.285 11:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.285 11:22:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:49.285 11:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.285 11:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:49.285 11:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.286 11:22:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:49.286 11:22:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:49.286 11:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.286 11:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:49.286 11:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.286 11:22:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:49.286 11:22:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:49.286 11:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.286 11:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:49.286 11:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.286 11:22:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:49.286 11:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.286 11:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:49.286 11:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.286 11:22:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:49.286 11:22:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:49.286 EAL: No free 2048 kB hugepages reported on node 1 
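Spelled out as plain rpc.py calls, the target configuration driven through rpc_cmd above amounts to the following (socket path assumed to be the default; command options copied verbatim from the trace):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                                      # create the TCP transport
  ./scripts/rpc.py bdev_malloc_create 64 512                                                    # 64 MiB, 512-byte-block bdev -> Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001    # subsystem, any host allowed
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                     # expose the bdev as namespace 1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # The perf run whose results follow connects to that listener:
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'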
00:06:59.323 Initializing NVMe Controllers 00:06:59.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:59.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:59.323 Initialization complete. Launching workers. 00:06:59.323 ======================================================== 00:06:59.323 Latency(us) 00:06:59.323 Device Information : IOPS MiB/s Average min max 00:06:59.323 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10950.82 42.78 5844.25 821.67 17155.49 00:06:59.323 ======================================================== 00:06:59.323 Total : 10950.82 42.78 5844.25 821.67 17155.49 00:06:59.323 00:06:59.323 11:22:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:59.323 11:22:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:59.323 11:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:59.323 11:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:59.323 11:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:59.323 11:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:59.323 11:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:59.323 11:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:59.323 rmmod nvme_tcp 00:06:59.323 rmmod nvme_fabrics 00:06:59.323 rmmod nvme_keyring 00:06:59.323 11:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:59.323 11:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:59.323 11:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:59.323 11:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2614541 ']' 00:06:59.323 11:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2614541 00:06:59.323 11:22:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 2614541 ']' 00:06:59.323 11:22:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 2614541 00:06:59.323 11:22:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:59.323 11:22:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:59.324 11:22:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2614541 00:06:59.324 11:22:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:59.324 11:22:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:59.324 11:22:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2614541' 00:06:59.324 killing process with pid 2614541 00:06:59.324 11:22:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 2614541 00:06:59.324 11:22:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 2614541 00:06:59.582 nvmf threads initialize successfully 00:06:59.582 bdev subsystem init successfully 00:06:59.582 created a nvmf target service 00:06:59.582 create targets's poll groups done 00:06:59.582 all subsystems of target started 00:06:59.582 nvmf target is running 00:06:59.582 all subsystems of target stopped 00:06:59.582 destroy targets's poll groups done 00:06:59.582 destroyed the nvmf target service 00:06:59.582 bdev subsystem finish successfully 00:06:59.582 nvmf threads destroy successfully 00:06:59.582 11:22:33 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:59.582 11:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:59.582 11:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:59.582 11:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:59.582 11:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:59.582 11:22:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.582 11:22:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:59.582 11:22:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.121 11:22:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:02.121 11:22:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:02.121 11:22:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:02.121 11:22:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:02.121 00:07:02.121 real 0m19.165s 00:07:02.121 user 0m44.375s 00:07:02.121 sys 0m5.701s 00:07:02.121 11:22:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.121 11:22:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:02.121 ************************************ 00:07:02.121 END TEST nvmf_example 00:07:02.121 ************************************ 00:07:02.121 11:22:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:02.121 11:22:36 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:02.121 11:22:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:02.121 11:22:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.121 11:22:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:02.121 ************************************ 00:07:02.121 START TEST nvmf_filesystem 00:07:02.121 ************************************ 00:07:02.121 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:02.121 * Looking for test storage... 
00:07:02.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:02.121 11:22:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:02.121 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:02.121 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:02.121 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:02.121 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:02.121 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:02.121 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:02.121 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:02.122 11:22:36 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:02.122 11:22:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:02.122 #define SPDK_CONFIG_H 00:07:02.122 #define SPDK_CONFIG_APPS 1 00:07:02.122 #define SPDK_CONFIG_ARCH native 00:07:02.122 #undef SPDK_CONFIG_ASAN 00:07:02.122 #undef SPDK_CONFIG_AVAHI 00:07:02.122 #undef SPDK_CONFIG_CET 00:07:02.122 #define SPDK_CONFIG_COVERAGE 1 00:07:02.122 #define SPDK_CONFIG_CROSS_PREFIX 00:07:02.122 #undef SPDK_CONFIG_CRYPTO 00:07:02.122 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:02.122 #undef SPDK_CONFIG_CUSTOMOCF 00:07:02.122 #undef SPDK_CONFIG_DAOS 00:07:02.122 #define SPDK_CONFIG_DAOS_DIR 00:07:02.122 #define SPDK_CONFIG_DEBUG 1 00:07:02.122 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:02.122 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:02.122 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:02.122 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:02.122 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:02.122 #undef SPDK_CONFIG_DPDK_UADK 00:07:02.122 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:02.122 #define SPDK_CONFIG_EXAMPLES 1 00:07:02.122 #undef SPDK_CONFIG_FC 00:07:02.122 #define SPDK_CONFIG_FC_PATH 00:07:02.122 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:02.122 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:02.122 #undef SPDK_CONFIG_FUSE 00:07:02.122 #undef SPDK_CONFIG_FUZZER 00:07:02.122 #define SPDK_CONFIG_FUZZER_LIB 00:07:02.122 #undef SPDK_CONFIG_GOLANG 00:07:02.122 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:02.122 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:02.122 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:02.122 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:02.122 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:02.122 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:02.122 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:02.122 #define SPDK_CONFIG_IDXD 1 00:07:02.122 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:02.122 #undef SPDK_CONFIG_IPSEC_MB 00:07:02.122 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:02.122 #define SPDK_CONFIG_ISAL 1 00:07:02.123 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:02.123 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:02.123 #define SPDK_CONFIG_LIBDIR 00:07:02.123 #undef SPDK_CONFIG_LTO 00:07:02.123 #define SPDK_CONFIG_MAX_LCORES 128 00:07:02.123 #define SPDK_CONFIG_NVME_CUSE 1 00:07:02.123 #undef SPDK_CONFIG_OCF 00:07:02.123 #define SPDK_CONFIG_OCF_PATH 00:07:02.123 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:02.123 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:02.123 #define SPDK_CONFIG_PGO_DIR 00:07:02.123 #undef SPDK_CONFIG_PGO_USE 00:07:02.123 #define SPDK_CONFIG_PREFIX /usr/local 00:07:02.123 #undef SPDK_CONFIG_RAID5F 00:07:02.123 #undef SPDK_CONFIG_RBD 00:07:02.123 #define SPDK_CONFIG_RDMA 1 00:07:02.123 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:02.123 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:02.123 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:02.123 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:02.123 #define SPDK_CONFIG_SHARED 1 00:07:02.123 #undef SPDK_CONFIG_SMA 00:07:02.123 #define SPDK_CONFIG_TESTS 1 00:07:02.123 #undef SPDK_CONFIG_TSAN 00:07:02.123 #define SPDK_CONFIG_UBLK 1 00:07:02.123 #define SPDK_CONFIG_UBSAN 1 00:07:02.123 #undef SPDK_CONFIG_UNIT_TESTS 00:07:02.123 #undef SPDK_CONFIG_URING 00:07:02.123 #define SPDK_CONFIG_URING_PATH 00:07:02.123 #undef SPDK_CONFIG_URING_ZNS 00:07:02.123 #undef SPDK_CONFIG_USDT 00:07:02.123 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:02.123 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:02.123 #define SPDK_CONFIG_VFIO_USER 1 00:07:02.123 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:02.123 #define SPDK_CONFIG_VHOST 1 00:07:02.123 #define SPDK_CONFIG_VIRTIO 1 00:07:02.123 #undef SPDK_CONFIG_VTUNE 00:07:02.123 #define SPDK_CONFIG_VTUNE_DIR 00:07:02.123 #define SPDK_CONFIG_WERROR 1 00:07:02.123 #define SPDK_CONFIG_WPDK_DIR 00:07:02.123 #undef SPDK_CONFIG_XNVME 00:07:02.123 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:02.123 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:02.124 11:22:36 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:02.124 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
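The trace just above shows autotest_common.sh exporting the sanitizer environment before any test binary is launched. A minimal sketch of that setup, reconstructed from the traced commands (the option strings and the libfuse3 suppression are copied verbatim from the log; the exact redirection into the suppression file is not visible in the trace and is an assumption here):

    # Sanitizer environment as set up by common/autotest_common.sh in this run
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    # LeakSanitizer suppressions are regenerated on every run
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" >> "$asan_suppression_file"   # suppress known libfuse3 leak reports (redirection assumed)
    export LSAN_OPTIONS=suppressions=$asan_suppression_file

With UBSAN_OPTIONS set to abort with exit code 134, any undefined-behaviour report later in the log would fail the test run rather than being silently printed.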
00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2617068 ]] 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2617068 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.SeD42g 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.SeD42g/tests/target /tmp/spdk.SeD42g 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- 
# avails["$mount"]=67108864 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=954339328 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330090496 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=83746865152 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=94501482496 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10754617344 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=47195103232 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=47250739200 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=18890862592 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=18900299776 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9437184 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=47249981440 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=47250743296 00:07:02.125 11:22:36 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=761856 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=9450143744 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=9450147840 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:02.125 * Looking for test storage... 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=83746865152 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12969209856 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:02.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:02.125 11:22:36 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.125 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.126 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.126 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.126 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.126 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.126 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.126 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.126 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.126 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:02.126 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:02.126 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.126 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.126 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:02.126 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.126 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:02.126 11:22:36 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.126 11:22:36 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
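The storage probing traced above (set_test_storage) can be condensed to the following sketch. Variable names follow the traced script and the numbers reproduce what is visible in this run (a ~2 GiB request satisfied by the overlay root filesystem); $testdir is assumed to already point at the test directory, as it does in the harness, and the 1K-block-to-byte conversion is inferred from the byte counts printed in the trace:

    # set_test_storage: pick the first candidate directory whose filesystem has
    # enough free space, falling back to a mktemp directory if needed.
    requested_size=$((2147483648 + 64 * 1024 * 1024))        # 2 GiB plus margin -> 2214592512
    storage_fallback=$(mktemp -udt spdk.XXXXXX)              # e.g. /tmp/spdk.SeD42g in this run
    storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
    declare -A fss sizes avails uses
    while read -r source fs size use avail _ mount; do       # df -T columns
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))                     # convert 1K blocks to bytes
        uses["$mount"]=$((use * 1024))
        avails["$mount"]=$((avail * 1024))
    done < <(df -T | grep -v Filesystem)
    for target_dir in "${storage_candidates[@]}"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails[$mount]}
        (( target_space >= requested_size )) || continue
        export SPDK_TEST_STORAGE=$target_dir                 # .../spdk/test/nvmf/target in this run
        break
    done

Here the first candidate already lives on the overlay root with ~83 GB available, so no fallback is needed and SPDK_TEST_STORAGE stays inside the repository tree.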
00:07:02.126 11:22:36 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.126 11:22:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.126 11:22:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.127 11:22:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.127 11:22:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:02.127 11:22:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.127 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:02.127 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:02.127 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:02.127 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.127 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.127 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.127 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:02.127 11:22:36 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:02.127 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:02.127 11:22:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:02.127 11:22:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:02.127 11:22:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:02.127 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:02.127 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:02.127 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:02.127 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:02.127 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:02.127 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.127 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:02.127 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.127 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:02.127 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:02.127 11:22:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:02.127 11:22:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.699 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:08.699 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:08.699 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:08.699 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:08.699 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:08.699 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:08.700 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:08.700 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:08.700 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:08.700 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:08.700 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:08.700 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:08.700 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:08.700 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:08.700 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:08.700 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:08.700 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:08.700 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:08.700 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:08.700 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:08.700 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:08.700 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:08.700 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:08.700 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:08.700 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:08.700 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:08.700 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:08.700 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:08.700 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:08.700 11:22:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:08.700 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:08.700 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.700 11:22:42 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:08.700 Found net devices under 0000:af:00.0: cvl_0_0 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:08.700 Found net devices under 0000:af:00.1: cvl_0_1 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:08.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:08.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:07:08.700 00:07:08.700 --- 10.0.0.2 ping statistics --- 00:07:08.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.700 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:08.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:08.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:07:08.700 00:07:08.700 --- 10.0.0.1 ping statistics --- 00:07:08.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.700 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.700 ************************************ 00:07:08.700 START TEST nvmf_filesystem_no_in_capsule 00:07:08.700 ************************************ 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2620234 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2620234 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2620234 ']' 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.700 11:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.701 11:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.701 11:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.701 [2024-07-15 11:22:42.407048] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:07:08.701 [2024-07-15 11:22:42.407100] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.701 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.701 [2024-07-15 11:22:42.492643] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:08.701 [2024-07-15 11:22:42.586137] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:08.701 [2024-07-15 11:22:42.586181] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:08.701 [2024-07-15 11:22:42.586191] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:08.701 [2024-07-15 11:22:42.586200] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:08.701 [2024-07-15 11:22:42.586208] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
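The nvmf_tcp_init and nvmfappstart steps traced above amount to the following sequence; this is a condensed, hand-written recap rather than the verbatim common.sh code, with the interface names, addresses and namespace name taken from the trace:

    # Move the target-side port into its own network namespace and address both ends.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                   # reachability check in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp                                                    # kernel NVMe/TCP initiator driver
    # The target application is then launched inside the namespace (path relative to the SPDK build tree):
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &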
00:07:08.701 [2024-07-15 11:22:42.586267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.701 [2024-07-15 11:22:42.586346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.701 [2024-07-15 11:22:42.586456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.701 [2024-07-15 11:22:42.586457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.958 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.958 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:08.959 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:08.959 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:08.959 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.959 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:08.959 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:08.959 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:08.959 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.959 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.959 [2024-07-15 11:22:43.393079] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.959 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.959 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:08.959 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.959 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:09.216 Malloc1 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:09.216 [2024-07-15 11:22:43.548817] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:09.216 { 00:07:09.216 "name": "Malloc1", 00:07:09.216 "aliases": [ 00:07:09.216 "daf0b3d5-7f21-4d17-8d0e-121f38f90489" 00:07:09.216 ], 00:07:09.216 "product_name": "Malloc disk", 00:07:09.216 "block_size": 512, 00:07:09.216 "num_blocks": 1048576, 00:07:09.216 "uuid": "daf0b3d5-7f21-4d17-8d0e-121f38f90489", 00:07:09.216 "assigned_rate_limits": { 00:07:09.216 "rw_ios_per_sec": 0, 00:07:09.216 "rw_mbytes_per_sec": 0, 00:07:09.216 "r_mbytes_per_sec": 0, 00:07:09.216 "w_mbytes_per_sec": 0 00:07:09.216 }, 00:07:09.216 "claimed": true, 00:07:09.216 "claim_type": "exclusive_write", 00:07:09.216 "zoned": false, 00:07:09.216 "supported_io_types": { 00:07:09.216 "read": true, 00:07:09.216 "write": true, 00:07:09.216 "unmap": true, 00:07:09.216 "flush": true, 00:07:09.216 "reset": true, 00:07:09.216 "nvme_admin": false, 00:07:09.216 "nvme_io": false, 00:07:09.216 "nvme_io_md": false, 00:07:09.216 "write_zeroes": true, 00:07:09.216 "zcopy": true, 00:07:09.216 "get_zone_info": false, 00:07:09.216 "zone_management": false, 00:07:09.216 "zone_append": false, 00:07:09.216 "compare": false, 00:07:09.216 "compare_and_write": false, 00:07:09.216 "abort": true, 00:07:09.216 "seek_hole": false, 00:07:09.216 "seek_data": false, 00:07:09.216 "copy": true, 00:07:09.216 "nvme_iov_md": false 00:07:09.216 }, 00:07:09.216 "memory_domains": [ 00:07:09.216 { 
00:07:09.216 "dma_device_id": "system", 00:07:09.216 "dma_device_type": 1 00:07:09.216 }, 00:07:09.216 { 00:07:09.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.216 "dma_device_type": 2 00:07:09.216 } 00:07:09.216 ], 00:07:09.216 "driver_specific": {} 00:07:09.216 } 00:07:09.216 ]' 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:09.216 11:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:10.593 11:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:10.593 11:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:10.593 11:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:10.593 11:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:10.593 11:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:13.126 11:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:13.126 11:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:13.127 11:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:13.127 11:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:13.127 11:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:13.127 11:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:13.127 11:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:13.127 11:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:13.127 11:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:13.127 11:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:07:13.127 11:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:13.127 11:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:13.127 11:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:13.127 11:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:13.127 11:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:13.127 11:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:13.127 11:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:13.127 11:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:13.127 11:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:14.060 11:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:14.060 11:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:14.060 11:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:14.060 11:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.060 11:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.060 ************************************ 00:07:14.060 START TEST filesystem_ext4 00:07:14.060 ************************************ 00:07:14.060 11:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:14.060 11:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:14.060 11:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:14.060 11:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:14.060 11:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:14.060 11:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:14.060 11:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:14.060 11:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:14.060 11:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:14.060 11:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:14.060 11:22:48 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:14.060 mke2fs 1.46.5 (30-Dec-2021) 00:07:14.060 Discarding device blocks: 0/522240 done 00:07:14.060 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:14.060 Filesystem UUID: 09c2e436-f80b-4725-8643-931bac043922 00:07:14.060 Superblock backups stored on blocks: 00:07:14.060 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:14.060 00:07:14.060 Allocating group tables: 0/64 done 00:07:14.060 Writing inode tables: 0/64 done 00:07:14.627 Creating journal (8192 blocks): done 00:07:15.561 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:07:15.561 00:07:15.561 11:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:15.561 11:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:16.129 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:16.129 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:16.129 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:16.129 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:16.129 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:16.129 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:16.388 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2620234 00:07:16.388 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:16.388 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:16.388 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:16.388 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:16.388 00:07:16.388 real 0m2.245s 00:07:16.388 user 0m0.030s 00:07:16.388 sys 0m0.056s 00:07:16.388 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.388 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:16.388 ************************************ 00:07:16.388 END TEST filesystem_ext4 00:07:16.388 ************************************ 00:07:16.388 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:16.388 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:16.388 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:16.388 11:22:50 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.388 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.388 ************************************ 00:07:16.388 START TEST filesystem_btrfs 00:07:16.388 ************************************ 00:07:16.388 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:16.388 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:16.388 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:16.388 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:16.388 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:16.388 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:16.388 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:16.388 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:16.388 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:16.388 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:16.388 11:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:16.646 btrfs-progs v6.6.2 00:07:16.646 See https://btrfs.readthedocs.io for more information. 00:07:16.646 00:07:16.646 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:16.646 NOTE: several default settings have changed in version 5.15, please make sure 00:07:16.646 this does not affect your deployments: 00:07:16.646 - DUP for metadata (-m dup) 00:07:16.646 - enabled no-holes (-O no-holes) 00:07:16.646 - enabled free-space-tree (-R free-space-tree) 00:07:16.646 00:07:16.646 Label: (null) 00:07:16.646 UUID: 04a2aa87-b93b-4246-94f0-66f55ff041c1 00:07:16.646 Node size: 16384 00:07:16.646 Sector size: 4096 00:07:16.646 Filesystem size: 510.00MiB 00:07:16.646 Block group profiles: 00:07:16.646 Data: single 8.00MiB 00:07:16.646 Metadata: DUP 32.00MiB 00:07:16.646 System: DUP 8.00MiB 00:07:16.646 SSD detected: yes 00:07:16.646 Zoned device: no 00:07:16.646 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:16.646 Runtime features: free-space-tree 00:07:16.646 Checksum: crc32c 00:07:16.646 Number of devices: 1 00:07:16.646 Devices: 00:07:16.646 ID SIZE PATH 00:07:16.646 1 510.00MiB /dev/nvme0n1p1 00:07:16.646 00:07:16.646 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:16.646 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:17.211 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:17.211 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2620234 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:17.470 00:07:17.470 real 0m1.035s 00:07:17.470 user 0m0.021s 00:07:17.470 sys 0m0.131s 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:17.470 ************************************ 00:07:17.470 END TEST filesystem_btrfs 00:07:17.470 ************************************ 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:17.470 ************************************ 00:07:17.470 START TEST filesystem_xfs 00:07:17.470 ************************************ 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:17.470 11:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:17.470 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:17.470 = sectsz=512 attr=2, projid32bit=1 00:07:17.470 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:17.470 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:17.470 data = bsize=4096 blocks=130560, imaxpct=25 00:07:17.470 = sunit=0 swidth=0 blks 00:07:17.470 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:17.470 log =internal log bsize=4096 blocks=16384, version=2 00:07:17.470 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:17.470 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:18.844 Discarding blocks...Done. 
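Each filesystem_* subtest above (ext4, btrfs, and xfs next) follows the same mount/IO/verify pattern; roughly the following, with the device names and the PID taken from the trace rather than reproducing the exact filesystem.sh code:

    mkdir -p /mnt/device
    mkfs.xfs -f /dev/nvme0n1p1                 # or mkfs.ext4 -F / mkfs.btrfs -f, depending on the subtest
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa                      # small real write through the filesystem
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                         # the target (pid 2620234 here) must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1      # the remote namespace must still be visible...
    lsblk -l -o NAME | grep -q -w nvme0n1p1    # ...and so must its partition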
00:07:18.844 11:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:18.844 11:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2620234 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:21.399 00:07:21.399 real 0m3.610s 00:07:21.399 user 0m0.020s 00:07:21.399 sys 0m0.074s 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:21.399 ************************************ 00:07:21.399 END TEST filesystem_xfs 00:07:21.399 ************************************ 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:21.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:21.399 11:22:55 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2620234 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2620234 ']' 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2620234 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2620234 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2620234' 00:07:21.399 killing process with pid 2620234 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 2620234 00:07:21.399 11:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 2620234 00:07:21.657 11:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:21.657 00:07:21.657 real 0m13.743s 00:07:21.657 user 0m53.778s 00:07:21.657 sys 0m1.384s 00:07:21.657 11:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.657 11:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.657 ************************************ 00:07:21.657 END TEST nvmf_filesystem_no_in_capsule 00:07:21.657 ************************************ 00:07:21.916 11:22:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:21.916 11:22:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:21.916 11:22:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:07:21.916 11:22:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.916 11:22:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.916 ************************************ 00:07:21.916 START TEST nvmf_filesystem_in_capsule 00:07:21.916 ************************************ 00:07:21.916 11:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:21.916 11:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:21.916 11:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:21.916 11:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:21.916 11:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:21.916 11:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.916 11:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2622854 00:07:21.916 11:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2622854 00:07:21.916 11:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:21.916 11:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2622854 ']' 00:07:21.916 11:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.916 11:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.916 11:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.916 11:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.916 11:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.916 [2024-07-15 11:22:56.222264] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:07:21.916 [2024-07-15 11:22:56.222318] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.916 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.916 [2024-07-15 11:22:56.307342] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.174 [2024-07-15 11:22:56.398922] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:22.174 [2024-07-15 11:22:56.398966] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
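For reference, the cleanup that closed the no_in_capsule half just above consists of these host- and target-side steps (a paraphrase of the traced commands, where rpc_cmd is assumed to be a thin wrapper around scripts/rpc.py and killprocess to be roughly kill plus wait):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1        # drop the test partition
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1         # detach the initiator
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"                    # stop the nvmf_tgt application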
00:07:22.174 [2024-07-15 11:22:56.398977] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:22.174 [2024-07-15 11:22:56.398986] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:22.174 [2024-07-15 11:22:56.398994] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:22.174 [2024-07-15 11:22:56.399048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.174 [2024-07-15 11:22:56.399161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.174 [2024-07-15 11:22:56.399290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.174 [2024-07-15 11:22:56.399292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.741 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.741 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:22.741 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:22.741 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:22.741 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.741 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:22.741 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:22.741 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:22.741 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.741 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.741 [2024-07-15 11:22:57.132063] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.741 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.741 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:22.741 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.741 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.000 Malloc1 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.000 11:22:57 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.000 [2024-07-15 11:22:57.295290] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:23.000 { 00:07:23.000 "name": "Malloc1", 00:07:23.000 "aliases": [ 00:07:23.000 "76dfe516-2991-498e-ac1b-954d03525436" 00:07:23.000 ], 00:07:23.000 "product_name": "Malloc disk", 00:07:23.000 "block_size": 512, 00:07:23.000 "num_blocks": 1048576, 00:07:23.000 "uuid": "76dfe516-2991-498e-ac1b-954d03525436", 00:07:23.000 "assigned_rate_limits": { 00:07:23.000 "rw_ios_per_sec": 0, 00:07:23.000 "rw_mbytes_per_sec": 0, 00:07:23.000 "r_mbytes_per_sec": 0, 00:07:23.000 "w_mbytes_per_sec": 0 00:07:23.000 }, 00:07:23.000 "claimed": true, 00:07:23.000 "claim_type": "exclusive_write", 00:07:23.000 "zoned": false, 00:07:23.000 "supported_io_types": { 00:07:23.000 "read": true, 00:07:23.000 "write": true, 00:07:23.000 "unmap": true, 00:07:23.000 "flush": true, 00:07:23.000 "reset": true, 00:07:23.000 "nvme_admin": false, 00:07:23.000 "nvme_io": false, 00:07:23.000 "nvme_io_md": false, 00:07:23.000 "write_zeroes": true, 00:07:23.000 "zcopy": true, 00:07:23.000 "get_zone_info": false, 00:07:23.000 "zone_management": false, 00:07:23.000 
"zone_append": false, 00:07:23.000 "compare": false, 00:07:23.000 "compare_and_write": false, 00:07:23.000 "abort": true, 00:07:23.000 "seek_hole": false, 00:07:23.000 "seek_data": false, 00:07:23.000 "copy": true, 00:07:23.000 "nvme_iov_md": false 00:07:23.000 }, 00:07:23.000 "memory_domains": [ 00:07:23.000 { 00:07:23.000 "dma_device_id": "system", 00:07:23.000 "dma_device_type": 1 00:07:23.000 }, 00:07:23.000 { 00:07:23.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.000 "dma_device_type": 2 00:07:23.000 } 00:07:23.000 ], 00:07:23.000 "driver_specific": {} 00:07:23.000 } 00:07:23.000 ]' 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:23.000 11:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:24.378 11:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:24.378 11:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:24.378 11:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:24.378 11:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:24.378 11:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:26.288 11:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:26.288 11:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:26.288 11:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:26.288 11:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:26.288 11:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:26.288 11:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:26.288 11:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:26.288 11:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:07:26.288 11:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:26.288 11:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:26.288 11:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:26.288 11:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:26.288 11:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:26.288 11:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:26.288 11:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:26.288 11:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:26.288 11:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:26.547 11:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:27.113 11:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:28.049 11:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:28.049 11:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:28.049 11:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:28.049 11:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.049 11:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.049 ************************************ 00:07:28.049 START TEST filesystem_in_capsule_ext4 00:07:28.049 ************************************ 00:07:28.049 11:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:28.049 11:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:28.049 11:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:28.049 11:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:28.049 11:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:28.049 11:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:28.049 11:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:28.049 11:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:28.049 11:23:02 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:28.049 11:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:28.049 11:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:28.049 mke2fs 1.46.5 (30-Dec-2021) 00:07:28.307 Discarding device blocks: 0/522240 done 00:07:28.307 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:28.307 Filesystem UUID: 996be411-50eb-4f00-980b-b5ba4ed226fb 00:07:28.307 Superblock backups stored on blocks: 00:07:28.307 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:28.307 00:07:28.307 Allocating group tables: 0/64 done 00:07:28.307 Writing inode tables: 0/64 done 00:07:28.307 Creating journal (8192 blocks): done 00:07:28.565 Writing superblocks and filesystem accounting information: 0/64 done 00:07:28.565 00:07:28.565 11:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:28.565 11:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:28.823 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:28.823 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:28.823 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:28.823 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:28.823 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:28.823 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:28.823 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2622854 00:07:28.823 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:28.823 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:28.823 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:28.823 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:28.823 00:07:28.823 real 0m0.695s 00:07:28.823 user 0m0.022s 00:07:28.823 sys 0m0.068s 00:07:28.823 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.823 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:28.823 ************************************ 00:07:28.823 END TEST filesystem_in_capsule_ext4 00:07:28.823 ************************************ 00:07:28.823 
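With mkfs.ext4 done, filesystem.sh runs the same mount/IO/unmount check that it will repeat for btrfs and xfs below. A condensed sketch of that check, assuming the partition created earlier in the run (/dev/nvme0n1p1); the kill -0 probe of the target PID seen in the trace is omitted here because it needs the running nvmfpid:

# Condensed sketch of the per-filesystem verification step.
dev=/dev/nvme0n1p1
mnt=/mnt/device

mkdir -p "$mnt"
mount "$dev" "$mnt"
touch "$mnt/aaa"      # minimal write through the NVMe-oF namespace
sync
rm "$mnt/aaa"
sync
umount "$mnt"

# Both the namespace and its partition should still be listed afterwards,
# i.e. the target did not drop the connection during the I/O.
lsblk -l -o NAME | grep -q -w nvme0n1
lsblk -l -o NAME | grep -q -w nvme0n1p1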
11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:28.823 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:28.824 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:28.824 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.824 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.824 ************************************ 00:07:28.824 START TEST filesystem_in_capsule_btrfs 00:07:28.824 ************************************ 00:07:28.824 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:28.824 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:28.824 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:28.824 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:28.824 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:28.824 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:28.824 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:28.824 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:28.824 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:28.824 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:28.824 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:29.081 btrfs-progs v6.6.2 00:07:29.081 See https://btrfs.readthedocs.io for more information. 00:07:29.081 00:07:29.081 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:29.081 NOTE: several default settings have changed in version 5.15, please make sure 00:07:29.081 this does not affect your deployments: 00:07:29.081 - DUP for metadata (-m dup) 00:07:29.081 - enabled no-holes (-O no-holes) 00:07:29.081 - enabled free-space-tree (-R free-space-tree) 00:07:29.081 00:07:29.081 Label: (null) 00:07:29.081 UUID: b3736532-86a4-4c96-822b-b4b58a5b5bc5 00:07:29.081 Node size: 16384 00:07:29.081 Sector size: 4096 00:07:29.081 Filesystem size: 510.00MiB 00:07:29.081 Block group profiles: 00:07:29.081 Data: single 8.00MiB 00:07:29.081 Metadata: DUP 32.00MiB 00:07:29.081 System: DUP 8.00MiB 00:07:29.081 SSD detected: yes 00:07:29.081 Zoned device: no 00:07:29.081 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:29.081 Runtime features: free-space-tree 00:07:29.081 Checksum: crc32c 00:07:29.081 Number of devices: 1 00:07:29.081 Devices: 00:07:29.081 ID SIZE PATH 00:07:29.081 1 510.00MiB /dev/nvme0n1p1 00:07:29.081 00:07:29.081 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:29.081 11:23:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:30.014 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:30.014 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2622854 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:30.015 00:07:30.015 real 0m0.970s 00:07:30.015 user 0m0.021s 00:07:30.015 sys 0m0.130s 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:30.015 ************************************ 00:07:30.015 END TEST filesystem_in_capsule_btrfs 00:07:30.015 ************************************ 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.015 ************************************ 00:07:30.015 START TEST filesystem_in_capsule_xfs 00:07:30.015 ************************************ 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:30.015 11:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:30.015 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:30.015 = sectsz=512 attr=2, projid32bit=1 00:07:30.015 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:30.015 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:30.015 data = bsize=4096 blocks=130560, imaxpct=25 00:07:30.015 = sunit=0 swidth=0 blks 00:07:30.015 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:30.015 log =internal log bsize=4096 blocks=16384, version=2 00:07:30.015 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:30.015 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:30.950 Discarding blocks...Done. 
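All three subtests funnel through the shared make_filesystem helper, which differs per filesystem only in the force flag handed to mkfs, as the '[ <fstype> = ext4 ]' branches in the traces show. A condensed sketch of that helper (the i/force locals in the trace suggest a retry loop, omitted here):

# Condensed sketch of the make_filesystem helper used by every subtest.
make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local force

    # Only ext4's mkfs spells "force" as -F; btrfs and xfs use -f.
    if [ "$fstype" = ext4 ]; then
        force=-F
    else
        force=-f
    fi

    mkfs."$fstype" "$force" "$dev_name"
}

# e.g. make_filesystem xfs /dev/nvme0n1p1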
00:07:30.950 11:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:30.950 11:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:32.856 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:32.856 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:32.856 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:32.856 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:32.856 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:32.856 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:32.856 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2622854 00:07:32.856 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:32.856 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:32.856 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:32.856 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:32.856 00:07:32.856 real 0m2.799s 00:07:32.856 user 0m0.024s 00:07:32.856 sys 0m0.071s 00:07:32.856 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.856 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:32.856 ************************************ 00:07:32.856 END TEST filesystem_in_capsule_xfs 00:07:32.856 ************************************ 00:07:32.856 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:32.856 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:32.856 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:32.856 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:33.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:33.115 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:33.115 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:33.115 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:33.115 11:23:07 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:33.115 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:33.115 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:33.115 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:33.115 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:33.115 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.115 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:33.115 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.115 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:33.115 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2622854 00:07:33.115 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2622854 ']' 00:07:33.115 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2622854 00:07:33.115 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:33.115 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:33.115 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2622854 00:07:33.115 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:33.115 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:33.115 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2622854' 00:07:33.115 killing process with pid 2622854 00:07:33.115 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 2622854 00:07:33.115 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 2622854 00:07:33.374 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:33.374 00:07:33.374 real 0m11.628s 00:07:33.374 user 0m45.349s 00:07:33.374 sys 0m1.327s 00:07:33.374 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.374 11:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:33.374 ************************************ 00:07:33.374 END TEST nvmf_filesystem_in_capsule 00:07:33.374 ************************************ 00:07:33.374 11:23:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:33.374 11:23:07 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:33.374 11:23:07 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:07:33.374 11:23:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:33.374 11:23:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:33.374 11:23:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:33.374 11:23:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:33.374 11:23:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:33.374 rmmod nvme_tcp 00:07:33.634 rmmod nvme_fabrics 00:07:33.634 rmmod nvme_keyring 00:07:33.634 11:23:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:33.634 11:23:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:33.634 11:23:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:33.634 11:23:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:33.634 11:23:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:33.634 11:23:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:33.634 11:23:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:33.634 11:23:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:33.634 11:23:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:33.634 11:23:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.634 11:23:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.634 11:23:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.541 11:23:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:35.541 00:07:35.541 real 0m33.851s 00:07:35.541 user 1m40.935s 00:07:35.541 sys 0m7.376s 00:07:35.541 11:23:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.541 11:23:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.541 ************************************ 00:07:35.541 END TEST nvmf_filesystem 00:07:35.541 ************************************ 00:07:35.541 11:23:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:35.541 11:23:09 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:35.541 11:23:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:35.541 11:23:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.541 11:23:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:35.801 ************************************ 00:07:35.801 START TEST nvmf_target_discovery 00:07:35.801 ************************************ 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:35.801 * Looking for test storage... 
00:07:35.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.801 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:35.802 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:35.802 11:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:35.802 11:23:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:42.436 11:23:15 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:42.436 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:42.436 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:42.436 Found net devices under 0000:af:00.0: cvl_0_0 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.436 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:42.437 Found net devices under 0000:af:00.1: cvl_0_1 00:07:42.437 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.437 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:42.437 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:42.437 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:42.437 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:42.437 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:42.437 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:42.437 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:42.437 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:42.437 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:42.437 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:42.437 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:42.437 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:42.437 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:42.437 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.437 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:42.437 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:42.437 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:42.437 11:23:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:42.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:42.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:07:42.437 00:07:42.437 --- 10.0.0.2 ping statistics --- 00:07:42.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.437 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:42.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:42.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:07:42.437 00:07:42.437 --- 10.0.0.1 ping statistics --- 00:07:42.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.437 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2628886 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2628886 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 2628886 ']' 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:42.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.437 11:23:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.437 [2024-07-15 11:23:16.237529] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:07:42.437 [2024-07-15 11:23:16.237583] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.437 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.437 [2024-07-15 11:23:16.329139] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:42.437 [2024-07-15 11:23:16.419084] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.437 [2024-07-15 11:23:16.419127] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:42.437 [2024-07-15 11:23:16.419137] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:42.437 [2024-07-15 11:23:16.419146] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:42.437 [2024-07-15 11:23:16.419154] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:42.437 [2024-07-15 11:23:16.419212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.437 [2024-07-15 11:23:16.419325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.437 [2024-07-15 11:23:16.419355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:42.437 [2024-07-15 11:23:16.419355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.006 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:43.006 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:43.006 11:23:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:43.006 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:43.006 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.006 11:23:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.006 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:43.006 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.006 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.006 [2024-07-15 11:23:17.233480] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.006 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.006 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:43.006 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:43.006 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
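discovery.sh is now building the target side: one null bdev, one subsystem, one namespace and one TCP listener for each of cnode1..cnode4, plus a discovery listener and a referral on port 4430. A sketch of that build-out expressed directly against SPDK's scripts/rpc.py (rpc_cmd in the trace is assumed to be a thin wrapper around it); sizes, NQNs and addresses are the ones used in this run:

# Sketch of the target build-out that the following trace walks through.
RPC=./scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192

for i in 1 2 3 4; do
    $RPC bdev_null_create "Null$i" 102400 512
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done

# Discovery listener plus a referral on port 4430 (also in the trace below).
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430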
00:07:43.006 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.006 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.006 Null1 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.007 [2024-07-15 11:23:17.285810] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.007 Null2 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:43.007 11:23:17 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.007 Null3 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.007 Null4 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.007 11:23:17 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.007 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:07:43.266 00:07:43.266 Discovery Log Number of Records 6, Generation counter 6 00:07:43.266 =====Discovery Log Entry 0====== 00:07:43.266 trtype: tcp 00:07:43.266 adrfam: ipv4 00:07:43.266 subtype: current discovery subsystem 00:07:43.266 treq: not required 00:07:43.266 portid: 0 00:07:43.266 trsvcid: 4420 00:07:43.266 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:43.266 traddr: 10.0.0.2 00:07:43.266 eflags: explicit discovery connections, duplicate discovery information 00:07:43.266 sectype: none 00:07:43.266 =====Discovery Log Entry 1====== 00:07:43.266 trtype: tcp 00:07:43.266 adrfam: ipv4 00:07:43.266 subtype: nvme subsystem 00:07:43.266 treq: not required 00:07:43.266 portid: 0 00:07:43.266 trsvcid: 4420 00:07:43.266 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:43.266 traddr: 10.0.0.2 00:07:43.266 eflags: none 00:07:43.266 sectype: none 00:07:43.266 =====Discovery Log Entry 2====== 00:07:43.266 trtype: tcp 00:07:43.266 adrfam: ipv4 00:07:43.266 subtype: nvme subsystem 00:07:43.266 treq: not required 00:07:43.266 portid: 0 00:07:43.266 trsvcid: 4420 00:07:43.266 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:43.266 traddr: 10.0.0.2 00:07:43.266 eflags: none 00:07:43.266 sectype: none 00:07:43.266 =====Discovery Log Entry 3====== 00:07:43.266 trtype: tcp 00:07:43.266 adrfam: ipv4 00:07:43.266 subtype: nvme subsystem 00:07:43.266 treq: not required 00:07:43.266 portid: 0 00:07:43.266 trsvcid: 4420 00:07:43.266 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:43.266 traddr: 10.0.0.2 00:07:43.266 eflags: none 00:07:43.266 sectype: none 00:07:43.266 =====Discovery Log Entry 4====== 00:07:43.266 trtype: tcp 00:07:43.266 adrfam: ipv4 00:07:43.266 subtype: nvme subsystem 00:07:43.266 treq: not required 
00:07:43.266 portid: 0 00:07:43.266 trsvcid: 4420 00:07:43.266 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:43.266 traddr: 10.0.0.2 00:07:43.266 eflags: none 00:07:43.266 sectype: none 00:07:43.266 =====Discovery Log Entry 5====== 00:07:43.266 trtype: tcp 00:07:43.266 adrfam: ipv4 00:07:43.266 subtype: discovery subsystem referral 00:07:43.266 treq: not required 00:07:43.266 portid: 0 00:07:43.266 trsvcid: 4430 00:07:43.266 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:43.266 traddr: 10.0.0.2 00:07:43.266 eflags: none 00:07:43.266 sectype: none 00:07:43.266 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:43.266 Perform nvmf subsystem discovery via RPC 00:07:43.266 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:43.266 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.266 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.266 [ 00:07:43.266 { 00:07:43.266 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:43.266 "subtype": "Discovery", 00:07:43.266 "listen_addresses": [ 00:07:43.266 { 00:07:43.266 "trtype": "TCP", 00:07:43.266 "adrfam": "IPv4", 00:07:43.266 "traddr": "10.0.0.2", 00:07:43.266 "trsvcid": "4420" 00:07:43.266 } 00:07:43.266 ], 00:07:43.266 "allow_any_host": true, 00:07:43.266 "hosts": [] 00:07:43.266 }, 00:07:43.266 { 00:07:43.266 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:43.266 "subtype": "NVMe", 00:07:43.266 "listen_addresses": [ 00:07:43.266 { 00:07:43.266 "trtype": "TCP", 00:07:43.266 "adrfam": "IPv4", 00:07:43.266 "traddr": "10.0.0.2", 00:07:43.266 "trsvcid": "4420" 00:07:43.266 } 00:07:43.266 ], 00:07:43.266 "allow_any_host": true, 00:07:43.266 "hosts": [], 00:07:43.266 "serial_number": "SPDK00000000000001", 00:07:43.266 "model_number": "SPDK bdev Controller", 00:07:43.266 "max_namespaces": 32, 00:07:43.266 "min_cntlid": 1, 00:07:43.266 "max_cntlid": 65519, 00:07:43.266 "namespaces": [ 00:07:43.266 { 00:07:43.266 "nsid": 1, 00:07:43.266 "bdev_name": "Null1", 00:07:43.266 "name": "Null1", 00:07:43.266 "nguid": "4FE1DAEE74304A6F8A04945CB60F49DF", 00:07:43.266 "uuid": "4fe1daee-7430-4a6f-8a04-945cb60f49df" 00:07:43.266 } 00:07:43.266 ] 00:07:43.266 }, 00:07:43.266 { 00:07:43.266 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:43.266 "subtype": "NVMe", 00:07:43.266 "listen_addresses": [ 00:07:43.266 { 00:07:43.266 "trtype": "TCP", 00:07:43.266 "adrfam": "IPv4", 00:07:43.266 "traddr": "10.0.0.2", 00:07:43.266 "trsvcid": "4420" 00:07:43.266 } 00:07:43.267 ], 00:07:43.267 "allow_any_host": true, 00:07:43.267 "hosts": [], 00:07:43.267 "serial_number": "SPDK00000000000002", 00:07:43.267 "model_number": "SPDK bdev Controller", 00:07:43.267 "max_namespaces": 32, 00:07:43.267 "min_cntlid": 1, 00:07:43.267 "max_cntlid": 65519, 00:07:43.267 "namespaces": [ 00:07:43.267 { 00:07:43.267 "nsid": 1, 00:07:43.267 "bdev_name": "Null2", 00:07:43.267 "name": "Null2", 00:07:43.267 "nguid": "2D53609D22B841ACAD8EBC479F7B7093", 00:07:43.267 "uuid": "2d53609d-22b8-41ac-ad8e-bc479f7b7093" 00:07:43.267 } 00:07:43.267 ] 00:07:43.267 }, 00:07:43.267 { 00:07:43.267 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:43.267 "subtype": "NVMe", 00:07:43.267 "listen_addresses": [ 00:07:43.267 { 00:07:43.267 "trtype": "TCP", 00:07:43.267 "adrfam": "IPv4", 00:07:43.267 "traddr": "10.0.0.2", 00:07:43.267 "trsvcid": "4420" 00:07:43.267 } 00:07:43.267 ], 00:07:43.267 "allow_any_host": true, 
00:07:43.267 "hosts": [], 00:07:43.267 "serial_number": "SPDK00000000000003", 00:07:43.267 "model_number": "SPDK bdev Controller", 00:07:43.267 "max_namespaces": 32, 00:07:43.267 "min_cntlid": 1, 00:07:43.267 "max_cntlid": 65519, 00:07:43.267 "namespaces": [ 00:07:43.267 { 00:07:43.267 "nsid": 1, 00:07:43.267 "bdev_name": "Null3", 00:07:43.267 "name": "Null3", 00:07:43.267 "nguid": "3390473A5480478DB4AD1991BE6E6079", 00:07:43.267 "uuid": "3390473a-5480-478d-b4ad-1991be6e6079" 00:07:43.267 } 00:07:43.267 ] 00:07:43.267 }, 00:07:43.267 { 00:07:43.267 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:43.267 "subtype": "NVMe", 00:07:43.267 "listen_addresses": [ 00:07:43.267 { 00:07:43.267 "trtype": "TCP", 00:07:43.267 "adrfam": "IPv4", 00:07:43.267 "traddr": "10.0.0.2", 00:07:43.267 "trsvcid": "4420" 00:07:43.267 } 00:07:43.267 ], 00:07:43.267 "allow_any_host": true, 00:07:43.267 "hosts": [], 00:07:43.267 "serial_number": "SPDK00000000000004", 00:07:43.267 "model_number": "SPDK bdev Controller", 00:07:43.267 "max_namespaces": 32, 00:07:43.267 "min_cntlid": 1, 00:07:43.267 "max_cntlid": 65519, 00:07:43.267 "namespaces": [ 00:07:43.267 { 00:07:43.267 "nsid": 1, 00:07:43.267 "bdev_name": "Null4", 00:07:43.267 "name": "Null4", 00:07:43.267 "nguid": "4236A4A231A64FE0806F8353B38D0DD4", 00:07:43.267 "uuid": "4236a4a2-31a6-4fe0-806f-8353b38d0dd4" 00:07:43.267 } 00:07:43.267 ] 00:07:43.267 } 00:07:43.267 ] 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:43.267 11:23:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:43.267 rmmod nvme_tcp 00:07:43.267 rmmod nvme_fabrics 00:07:43.267 rmmod nvme_keyring 00:07:43.526 11:23:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:43.526 11:23:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:43.526 11:23:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:43.526 11:23:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2628886 ']' 00:07:43.526 11:23:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2628886 00:07:43.526 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 2628886 ']' 00:07:43.526 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 2628886 00:07:43.526 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:43.526 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:43.526 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2628886 00:07:43.526 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:43.526 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:43.526 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2628886' 00:07:43.526 killing process with pid 2628886 00:07:43.526 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 2628886 00:07:43.526 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 2628886 00:07:43.785 11:23:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:43.785 11:23:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:43.785 11:23:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:43.785 11:23:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:43.785 11:23:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:43.785 11:23:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.785 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.785 11:23:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.692 11:23:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:45.692 00:07:45.692 real 0m10.024s 00:07:45.692 user 0m8.233s 00:07:45.692 sys 0m4.883s 00:07:45.692 11:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.692 11:23:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.692 ************************************ 00:07:45.692 END TEST nvmf_target_discovery 00:07:45.692 ************************************ 00:07:45.692 11:23:20 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:07:45.692 11:23:20 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:45.692 11:23:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:45.692 11:23:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.692 11:23:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:45.692 ************************************ 00:07:45.692 START TEST nvmf_referrals 00:07:45.692 ************************************ 00:07:45.692 11:23:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:45.952 * Looking for test storage... 00:07:45.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
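The three referral addresses and port 4430 defined above are what this test registers against the discovery subsystem and then expects to see from both the target and the initiator side. A minimal sketch of that round trip, assuming scripts/rpc.py and nvme-cli as the tools (the discovery listener ends up on 10.0.0.2:8009 later in this log):

# register the three referrals on the running target
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done

# target-side view: expect three entries
scripts/rpc.py nvmf_discovery_get_referrals | jq length

# initiator-side view: referrals appear as extra discovery log records
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

# remove them again and expect an empty list
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
done
scripts/rpc.py nvmf_discovery_get_referrals | jq length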
00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:45.952 11:23:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.525 11:23:25 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:52.525 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:52.525 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:52.525 11:23:25 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:52.525 Found net devices under 0000:af:00.0: cvl_0_0 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:52.525 Found net devices under 0000:af:00.1: cvl_0_1 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:52.525 11:23:25 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:52.525 11:23:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:52.525 11:23:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:52.525 11:23:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:52.525 11:23:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:52.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:07:52.525 00:07:52.525 --- 10.0.0.2 ping statistics --- 00:07:52.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.525 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:07:52.525 11:23:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:52.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:52.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:07:52.525 00:07:52.525 --- 10.0.0.1 ping statistics --- 00:07:52.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.525 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:07:52.525 11:23:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.526 11:23:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:52.526 11:23:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:52.526 11:23:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.526 11:23:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:52.526 11:23:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:52.526 11:23:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.526 11:23:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:52.526 11:23:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:52.526 11:23:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:52.526 11:23:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:52.526 11:23:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:52.526 11:23:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.526 11:23:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2632845 00:07:52.526 11:23:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2632845 00:07:52.526 11:23:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:52.526 11:23:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 2632845 ']' 00:07:52.526 11:23:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.526 11:23:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:52.526 11:23:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
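By this point nvmftestinit has split the two e810 ports between a private network namespace (target side) and the root namespace (initiator side), so the NVMe/TCP traffic below really crosses the physical link. A condensed sketch of the plumbing recorded above; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are simply the values this run used:

# target port into its own namespace, initiator port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# let NVMe/TCP traffic in on the initiator interface, then sanity-check both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# the target application itself is launched inside the namespace
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF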
00:07:52.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.526 11:23:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:52.526 11:23:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.526 [2024-07-15 11:23:26.179696] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:07:52.526 [2024-07-15 11:23:26.179752] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.526 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.526 [2024-07-15 11:23:26.266882] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.526 [2024-07-15 11:23:26.357892] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.526 [2024-07-15 11:23:26.357933] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.526 [2024-07-15 11:23:26.357944] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.526 [2024-07-15 11:23:26.357953] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.526 [2024-07-15 11:23:26.357960] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:52.526 [2024-07-15 11:23:26.358016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.526 [2024-07-15 11:23:26.358127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.526 [2024-07-15 11:23:26.358241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.526 [2024-07-15 11:23:26.358241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.785 [2024-07-15 11:23:27.082751] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.785 [2024-07-15 11:23:27.102989] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:52.785 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:53.044 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:53.044 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:53.044 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:53.044 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.044 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.044 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.044 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:53.044 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.044 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.044 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.044 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:53.044 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.044 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.044 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.044 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:53.044 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:53.044 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.044 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.044 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.044 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:53.044 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:53.044 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:53.044 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:53.044 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:53.044 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:53.044 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:53.304 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:53.564 11:23:27 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:53.564 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:53.564 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:53.564 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:53.564 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:53.564 11:23:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:53.823 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:53.823 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:53.823 11:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.823 11:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.823 11:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.823 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:53.823 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:53.823 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:53.823 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:53.823 11:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.823 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:53.823 11:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.823 11:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.823 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:53.823 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:53.823 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:53.823 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:53.823 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:53.823 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:53.823 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:53.823 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:53.823 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:53.823 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:54.083 11:23:28 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:54.083 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:54.083 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:54.083 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:54.083 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:54.083 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:54.083 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:54.083 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:54.083 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:54.083 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:54.083 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:54.083 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:54.083 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:54.083 11:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:54.340 
11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:54.340 rmmod nvme_tcp 00:07:54.340 rmmod nvme_fabrics 00:07:54.340 rmmod nvme_keyring 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2632845 ']' 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2632845 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 2632845 ']' 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 2632845 00:07:54.340 11:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:54.599 11:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:54.599 11:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2632845 00:07:54.599 11:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:54.599 11:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:54.599 11:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2632845' 00:07:54.599 killing process with pid 2632845 00:07:54.599 11:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 2632845 00:07:54.599 11:23:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 2632845 00:07:54.859 11:23:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:54.859 11:23:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:54.859 11:23:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:54.859 11:23:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:54.859 11:23:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:54.859 11:23:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.859 11:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:54.859 11:23:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.764 11:23:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:56.764 00:07:56.764 real 0m11.007s 00:07:56.764 user 0m13.322s 00:07:56.764 sys 0m5.138s 00:07:56.764 11:23:31 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.764 11:23:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.764 ************************************ 00:07:56.764 END TEST nvmf_referrals 00:07:56.764 ************************************ 00:07:56.764 11:23:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:56.764 11:23:31 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:56.764 11:23:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:56.764 11:23:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.764 11:23:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:56.764 ************************************ 00:07:56.764 START TEST nvmf_connect_disconnect 00:07:56.764 ************************************ 00:07:56.765 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:57.024 * Looking for test storage... 00:07:57.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.024 11:23:31 
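As traced above, common.sh derives the host identity once per run: nvme gen-hostnqn produces an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the host ID reused by later nvme commands is just the UUID portion. A small sketch of one way to derive it, assuming nvme-cli is installed:

  NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:00abaa28-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep only the UUID after the last colon
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")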
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:57.024 11:23:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:03.614 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:03.614 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:03.614 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:03.614 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:03.614 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:03.614 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:03.615 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:03.615 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:03.615 11:23:36 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:03.615 Found net devices under 0000:af:00.0: cvl_0_0 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:03.615 Found net devices under 0000:af:00.1: cvl_0_1 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- 
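The block above is gather_supported_nvmf_pci_devs: it whitelists NIC PCI device IDs (Intel E810 0x1592/0x159b, X722 0x37d2 and several Mellanox parts), then resolves each matching PCI address to its kernel net device through sysfs, ending up with cvl_0_0 as the target-side port and cvl_0_1 as the initiator side. A minimal sketch of the sysfs lookup, using the PCI address found in this run:

  # Resolve a PCI function to its network interface name(s) via sysfs.
  pci=0000:af:00.0                                  # address found in this run
  for netdir in /sys/bus/pci/devices/$pci/net/*; do
      [ -e "$netdir" ] || continue
      echo "Found net device under $pci: ${netdir##*/}"   # e.g. cvl_0_0
  done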
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:03.615 11:23:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:03.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:03.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:08:03.615 00:08:03.615 --- 10.0.0.2 ping statistics --- 00:08:03.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.615 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:03.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:03.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:08:03.615 00:08:03.615 --- 10.0.0.1 ping statistics --- 00:08:03.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.615 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2637012 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2637012 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 2637012 ']' 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:03.615 11:23:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:03.615 [2024-07-15 11:23:37.288446] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
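nvmf_tcp_init, traced above, wires the two physical ports back-to-back through a network namespace so target and initiator traffic really crosses the E810 link: the target port moves into cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator port keeps 10.0.0.1/24 in the root namespace, port 4420 is opened in iptables, and a ping in each direction confirms connectivity before nvmf_tgt is launched inside the namespace. A condensed sketch of those commands (interface and address values from this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1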
00:08:03.616 [2024-07-15 11:23:37.288503] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.616 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.616 [2024-07-15 11:23:37.376083] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.616 [2024-07-15 11:23:37.466399] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.616 [2024-07-15 11:23:37.466436] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.616 [2024-07-15 11:23:37.466447] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.616 [2024-07-15 11:23:37.466456] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.616 [2024-07-15 11:23:37.466463] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.616 [2024-07-15 11:23:37.466510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.616 [2024-07-15 11:23:37.466623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.616 [2024-07-15 11:23:37.466709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.616 [2024-07-15 11:23:37.466709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.875 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:03.875 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:03.875 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:03.875 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:03.875 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:03.875 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.875 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:03.875 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.875 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:03.875 [2024-07-15 11:23:38.197903] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.875 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.875 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:03.875 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.875 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:03.875 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.875 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:03.875 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:03.875 11:23:38 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.875 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:03.875 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.875 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:03.875 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.875 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:03.875 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.876 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:03.876 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.876 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:03.876 [2024-07-15 11:23:38.258025] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:03.876 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.876 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:03.876 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:03.876 11:23:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:08.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:11.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:14.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.190 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:21.190 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:21.190 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:21.190 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:21.190 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:21.190 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:21.190 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:21.190 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:21.190 rmmod nvme_tcp 00:08:21.190 rmmod nvme_fabrics 00:08:21.191 rmmod nvme_keyring 00:08:21.191 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:21.191 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:21.191 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:21.191 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2637012 ']' 00:08:21.191 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2637012 00:08:21.191 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- 
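connect_disconnect.sh provisions the target entirely over the RPC socket, as traced above: a TCP transport, a 64 MiB / 512-byte malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420; each of the five iterations then attaches and detaches an initiator, which produces the "disconnected 1 controller(s)" lines. The RPC sequence below mirrors the trace; the nvme connect/disconnect pair is an assumption about what one iteration does, since those commands are not echoed in this log.

  # Target-side provisioning (rpc.py path abbreviated), mirroring the traced RPCs.
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc.py bdev_malloc_create 64 512                              # creates Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Assumed shape of one connect/disconnect iteration on the initiator side:
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1                 # prints "... disconnected 1 controller(s)"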
common/autotest_common.sh@948 -- # '[' -z 2637012 ']' 00:08:21.191 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 2637012 00:08:21.191 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:21.191 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:21.191 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2637012 00:08:21.449 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:21.449 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:21.449 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2637012' 00:08:21.449 killing process with pid 2637012 00:08:21.449 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 2637012 00:08:21.449 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 2637012 00:08:21.449 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:21.449 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:21.449 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:21.449 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:21.449 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:21.708 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.708 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:21.708 11:23:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.614 11:23:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:23.614 00:08:23.614 real 0m26.768s 00:08:23.614 user 1m14.776s 00:08:23.614 sys 0m5.781s 00:08:23.614 11:23:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.614 11:23:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:23.614 ************************************ 00:08:23.614 END TEST nvmf_connect_disconnect 00:08:23.614 ************************************ 00:08:23.614 11:23:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:23.614 11:23:58 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:23.614 11:23:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:23.614 11:23:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.614 11:23:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:23.614 ************************************ 00:08:23.614 START TEST nvmf_multitarget 00:08:23.614 ************************************ 00:08:23.614 11:23:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:23.873 * Looking for test storage... 
00:08:23.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:23.874 11:23:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.498 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:30.499 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:30.499 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:30.499 Found net devices under 0000:af:00.0: cvl_0_0 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:30.499 Found net devices under 0000:af:00.1: cvl_0_1 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:30.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:30.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:08:30.499 00:08:30.499 --- 10.0.0.2 ping statistics --- 00:08:30.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.499 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:08:30.499 11:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:30.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:30.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:08:30.499 00:08:30.499 --- 10.0.0.1 ping statistics --- 00:08:30.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.499 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2644220 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2644220 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 2644220 ']' 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:30.499 [2024-07-15 11:24:04.101532] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:08:30.499 [2024-07-15 11:24:04.101587] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.499 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.499 [2024-07-15 11:24:04.193073] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:30.499 [2024-07-15 11:24:04.289054] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.499 [2024-07-15 11:24:04.289095] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.499 [2024-07-15 11:24:04.289105] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.499 [2024-07-15 11:24:04.289114] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.499 [2024-07-15 11:24:04.289121] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.499 [2024-07-15 11:24:04.289174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.499 [2024-07-15 11:24:04.289211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.499 [2024-07-15 11:24:04.289298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.499 [2024-07-15 11:24:04.289300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:30.499 "nvmf_tgt_1" 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:30.499 "nvmf_tgt_2" 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:30.499 11:24:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:30.758 11:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:08:30.758 11:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:30.758 true 00:08:30.758 11:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:31.017 true 00:08:31.017 11:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:31.017 11:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:31.017 11:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:31.017 11:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:31.017 11:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:31.017 11:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:31.017 11:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:31.017 11:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:31.017 11:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:31.017 11:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:31.017 11:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:31.017 rmmod nvme_tcp 00:08:31.017 rmmod nvme_fabrics 00:08:31.017 rmmod nvme_keyring 00:08:31.017 11:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:31.017 11:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:31.017 11:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:31.017 11:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2644220 ']' 00:08:31.017 11:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2644220 00:08:31.017 11:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 2644220 ']' 00:08:31.017 11:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 2644220 00:08:31.017 11:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:31.017 11:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:31.017 11:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2644220 00:08:31.277 11:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:31.277 11:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:31.277 11:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2644220' 00:08:31.277 killing process with pid 2644220 00:08:31.277 11:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 2644220 00:08:31.277 11:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 2644220 00:08:31.277 11:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:31.277 11:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:31.277 11:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:31.277 11:24:05 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:31.277 11:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:31.277 11:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.277 11:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.277 11:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.814 11:24:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:33.814 00:08:33.814 real 0m9.746s 00:08:33.814 user 0m8.298s 00:08:33.814 sys 0m4.915s 00:08:33.814 11:24:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.814 11:24:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:33.814 ************************************ 00:08:33.814 END TEST nvmf_multitarget 00:08:33.814 ************************************ 00:08:33.814 11:24:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:33.814 11:24:07 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:33.814 11:24:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:33.814 11:24:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.814 11:24:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:33.814 ************************************ 00:08:33.814 START TEST nvmf_rpc 00:08:33.814 ************************************ 00:08:33.814 11:24:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:33.814 * Looking for test storage... 
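The nvmf_multitarget case that just ended (END TEST above) exercises the multi-target RPCs end to end; stripped of the xtrace noise it is roughly the sequence below, using the multitarget_rpc.py helper shown in the trace ($RPC is only an illustrative shorthand for its full workspace path):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  $RPC nvmf_get_targets | jq length          # only the default target exists: expect 1
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  $RPC nvmf_get_targets | jq length          # expect 3
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  $RPC nvmf_get_targets | jq length          # back to 1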
00:08:33.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:33.814 11:24:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:33.814 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:33.814 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.814 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.814 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.814 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.814 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.814 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.814 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.814 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.814 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.814 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.814 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:33.814 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:33.814 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.814 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:33.815 11:24:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.815 11:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:33.815 11:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:33.815 11:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:33.815 11:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
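nvmftestinit, whose trace begins here, discovers the two E810 ports (cvl_0_0 and cvl_0_1 in the lines that follow) and moves one of them into a network namespace so the initiator and target sides use separate interfaces. Condensed from the nvmf_tcp_init steps traced below, the plumbing is approximately:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in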
00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:40.386 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:40.386 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:40.386 Found net devices under 0000:af:00.0: cvl_0_0 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:40.386 Found net devices under 0000:af:00.1: cvl_0_1 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:40.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:40.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:08:40.386 00:08:40.386 --- 10.0.0.2 ping statistics --- 00:08:40.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.386 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:40.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:40.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:08:40.386 00:08:40.386 --- 10.0.0.1 ping statistics --- 00:08:40.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.386 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2648523 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2648523 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 2648523 ']' 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:40.386 11:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.386 [2024-07-15 11:24:13.981593] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:08:40.386 [2024-07-15 11:24:13.981650] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.386 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.386 [2024-07-15 11:24:14.069181] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.386 [2024-07-15 11:24:14.160195] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.386 [2024-07-15 11:24:14.160237] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:40.386 [2024-07-15 11:24:14.160247] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.386 [2024-07-15 11:24:14.160263] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.386 [2024-07-15 11:24:14.160271] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:40.386 [2024-07-15 11:24:14.160325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.386 [2024-07-15 11:24:14.160438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.386 [2024-07-15 11:24:14.160550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.386 [2024-07-15 11:24:14.160550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.645 11:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:40.645 11:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:40.645 11:24:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:40.645 11:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:40.645 11:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.645 11:24:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.645 11:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:40.645 11:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.645 11:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.645 11:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.645 11:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:40.645 "tick_rate": 2200000000, 00:08:40.645 "poll_groups": [ 00:08:40.645 { 00:08:40.645 "name": "nvmf_tgt_poll_group_000", 00:08:40.645 "admin_qpairs": 0, 00:08:40.645 "io_qpairs": 0, 00:08:40.645 "current_admin_qpairs": 0, 00:08:40.645 "current_io_qpairs": 0, 00:08:40.645 "pending_bdev_io": 0, 00:08:40.645 "completed_nvme_io": 0, 00:08:40.645 "transports": [] 00:08:40.645 }, 00:08:40.645 { 00:08:40.645 "name": "nvmf_tgt_poll_group_001", 00:08:40.645 "admin_qpairs": 0, 00:08:40.645 "io_qpairs": 0, 00:08:40.645 "current_admin_qpairs": 0, 00:08:40.645 "current_io_qpairs": 0, 00:08:40.645 "pending_bdev_io": 0, 00:08:40.645 "completed_nvme_io": 0, 00:08:40.645 "transports": [] 00:08:40.645 }, 00:08:40.645 { 00:08:40.645 "name": "nvmf_tgt_poll_group_002", 00:08:40.645 "admin_qpairs": 0, 00:08:40.645 "io_qpairs": 0, 00:08:40.645 "current_admin_qpairs": 0, 00:08:40.645 "current_io_qpairs": 0, 00:08:40.645 "pending_bdev_io": 0, 00:08:40.645 "completed_nvme_io": 0, 00:08:40.645 "transports": [] 00:08:40.645 }, 00:08:40.645 { 00:08:40.645 "name": "nvmf_tgt_poll_group_003", 00:08:40.645 "admin_qpairs": 0, 00:08:40.645 "io_qpairs": 0, 00:08:40.645 "current_admin_qpairs": 0, 00:08:40.645 "current_io_qpairs": 0, 00:08:40.645 "pending_bdev_io": 0, 00:08:40.645 "completed_nvme_io": 0, 00:08:40.645 "transports": [] 00:08:40.645 } 00:08:40.645 ] 00:08:40.645 }' 00:08:40.645 11:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:40.645 11:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:40.645 11:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:40.645 11:24:14 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:08:40.645 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:40.645 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:40.645 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:40.645 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:40.645 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.645 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.645 [2024-07-15 11:24:15.086062] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.645 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.645 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:40.645 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.645 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.904 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.904 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:40.904 "tick_rate": 2200000000, 00:08:40.904 "poll_groups": [ 00:08:40.904 { 00:08:40.904 "name": "nvmf_tgt_poll_group_000", 00:08:40.904 "admin_qpairs": 0, 00:08:40.904 "io_qpairs": 0, 00:08:40.904 "current_admin_qpairs": 0, 00:08:40.904 "current_io_qpairs": 0, 00:08:40.904 "pending_bdev_io": 0, 00:08:40.904 "completed_nvme_io": 0, 00:08:40.904 "transports": [ 00:08:40.904 { 00:08:40.904 "trtype": "TCP" 00:08:40.904 } 00:08:40.904 ] 00:08:40.904 }, 00:08:40.904 { 00:08:40.904 "name": "nvmf_tgt_poll_group_001", 00:08:40.904 "admin_qpairs": 0, 00:08:40.904 "io_qpairs": 0, 00:08:40.904 "current_admin_qpairs": 0, 00:08:40.904 "current_io_qpairs": 0, 00:08:40.904 "pending_bdev_io": 0, 00:08:40.904 "completed_nvme_io": 0, 00:08:40.904 "transports": [ 00:08:40.904 { 00:08:40.904 "trtype": "TCP" 00:08:40.904 } 00:08:40.904 ] 00:08:40.904 }, 00:08:40.904 { 00:08:40.904 "name": "nvmf_tgt_poll_group_002", 00:08:40.904 "admin_qpairs": 0, 00:08:40.904 "io_qpairs": 0, 00:08:40.904 "current_admin_qpairs": 0, 00:08:40.904 "current_io_qpairs": 0, 00:08:40.904 "pending_bdev_io": 0, 00:08:40.904 "completed_nvme_io": 0, 00:08:40.904 "transports": [ 00:08:40.904 { 00:08:40.904 "trtype": "TCP" 00:08:40.904 } 00:08:40.904 ] 00:08:40.904 }, 00:08:40.904 { 00:08:40.904 "name": "nvmf_tgt_poll_group_003", 00:08:40.904 "admin_qpairs": 0, 00:08:40.904 "io_qpairs": 0, 00:08:40.904 "current_admin_qpairs": 0, 00:08:40.904 "current_io_qpairs": 0, 00:08:40.904 "pending_bdev_io": 0, 00:08:40.904 "completed_nvme_io": 0, 00:08:40.904 "transports": [ 00:08:40.904 { 00:08:40.904 "trtype": "TCP" 00:08:40.904 } 00:08:40.904 ] 00:08:40.904 } 00:08:40.904 ] 00:08:40.904 }' 00:08:40.904 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:40.904 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:40.904 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:40.904 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:40.904 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:40.904 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:40.904 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
00:08:40.904 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:40.904 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:40.904 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:40.904 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:40.904 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:40.904 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:40.904 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:40.904 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.904 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.904 Malloc1 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.905 [2024-07-15 11:24:15.270652] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:40.905 [2024-07-15 11:24:15.299300] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:08:40.905 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:40.905 could not add new controller: failed to write to nvme-fabrics device 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.905 11:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:42.279 11:24:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:42.280 11:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:42.280 11:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:42.280 11:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:42.280 11:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:44.179 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:44.179 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:44.179 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:44.179 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:44.179 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:44.179 11:24:18 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:44.179 11:24:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:44.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.437 11:24:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:44.437 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:44.437 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:44.437 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.437 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:44.437 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.437 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:44.437 11:24:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:44.437 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.437 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.437 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.438 11:24:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:44.438 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:44.438 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:44.438 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:44.438 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:44.438 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:44.438 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:44.438 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:44.438 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:44.438 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:44.438 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:44.438 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:44.438 [2024-07-15 11:24:18.773797] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:08:44.438 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:44.438 could not add new controller: failed to write to nvme-fabrics device 00:08:44.438 11:24:18 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:08:44.438 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:44.438 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:44.438 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:44.438 11:24:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:44.438 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.438 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.438 11:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.438 11:24:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:45.814 11:24:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:45.814 11:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:45.814 11:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:45.814 11:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:45.814 11:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:47.716 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:47.716 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:47.716 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:47.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:47.975 11:24:22 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.975 [2024-07-15 11:24:22.319392] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.975 11:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:49.351 11:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:49.351 11:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:49.351 11:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:49.351 11:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:49.351 11:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:51.254 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:51.254 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:51.254 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:51.254 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:51.254 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:51.254 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:51.254 11:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:51.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:51.514 [2024-07-15 11:24:25.909716] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.514 11:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:52.894 11:24:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:52.894 11:24:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:08:52.894 11:24:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:52.894 11:24:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:52.894 11:24:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:54.797 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:54.797 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:54.797 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:54.797 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:54.797 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:54.797 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:54.797 11:24:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:55.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.056 [2024-07-15 11:24:29.395952] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.056 11:24:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:56.434 11:24:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:56.434 11:24:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:56.434 11:24:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:56.434 11:24:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:56.434 11:24:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:58.339 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:58.339 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:58.339 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:58.339 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:58.339 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:58.339 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:58.339 11:24:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:58.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.598 [2024-07-15 11:24:32.891508] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.598 11:24:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:59.975 11:24:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:59.975 11:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:59.975 11:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:59.975 11:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:59.975 11:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:01.880 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:01.880 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:01.880 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:01.880 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:01.880 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:01.880 
11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:01.880 11:24:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:01.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.880 11:24:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:01.880 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:01.880 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:01.880 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:01.880 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:01.880 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:01.880 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:01.880 11:24:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:01.880 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.880 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.880 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.880 11:24:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:01.880 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.880 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.139 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.139 11:24:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:02.139 11:24:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:02.139 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.139 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.139 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.139 11:24:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.139 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.139 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.139 [2024-07-15 11:24:36.363662] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.139 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.139 11:24:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:02.139 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.139 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.139 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.139 11:24:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:02.139 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.139 11:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.139 11:24:36 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.139 11:24:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:03.517 11:24:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:03.517 11:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:03.517 11:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:03.517 11:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:03.517 11:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:05.422 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:05.422 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:05.422 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:05.422 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:05.422 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:05.422 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:05.422 11:24:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:05.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.422 11:24:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:05.422 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:05.422 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:05.422 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.422 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:05.422 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.422 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:05.422 11:24:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:05.422 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.422 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.681 [2024-07-15 11:24:39.926250] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.681 [2024-07-15 11:24:39.974413] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.681 11:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.681 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.681 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.681 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.682 [2024-07-15 11:24:40.026632] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
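The waitforserial and waitforserial_disconnect calls traced earlier in this block are the host-side synchronization points: after nvme connect the test polls lsblk until a block device carrying the SPDKISFASTANDAWESOME serial shows up, and after nvme disconnect it polls until that serial is gone again. The following is a condensed sketch reconstructed from the trace, not the verbatim autotest_common.sh helpers; the retry limit of 15 and the 2-second sleep match what the trace shows, the rest (argument handling, the 1-second disconnect poll interval) is an assumption made for illustration.

  # Poll lsblk until the expected number of namespaces with the given serial appears.
  waitforserial() {
      local serial=$1 i=0
      local nvme_device_counter=${2:-1} nvme_devices=0
      while (( i++ <= 15 )); do
          sleep 2
          nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
          (( nvme_devices == nvme_device_counter )) && return 0
      done
      return 1
  }

  # Poll until no block device with the given serial remains after nvme disconnect.
  waitforserial_disconnect() {
      local serial=$1 i=0
      while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
          (( i++ > 15 )) && return 1
          sleep 1            # poll interval is an assumption; the trace exits on the first check
      done
      return 0
  }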
00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.682 [2024-07-15 11:24:40.078859] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
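Stripped of the rpc_cmd/xtrace wrappers, each iteration of the loop being traced here is a plain sequence of SPDK JSON-RPC calls, plus (in the earlier loop at target/rpc.sh@81-94) an nvme-cli connect/disconnect against the listener. One pass looks roughly like the sketch below; the rpc.py path is illustrative, while the NQN, serial, bdev name, address and port are the ones visible in the trace.

  rpc=./scripts/rpc.py                      # illustrative path to SPDK's rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Provision the subsystem on the TCP transport.
  $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns "$nqn" Malloc1       # the earlier loop pins the NSID with -n 5
  $rpc nvmf_subsystem_allow_any_host "$nqn"

  # Host side (earlier loop only): attach, verify via waitforserial, then detach.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$nqn" \
       --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
  nvme disconnect -n "$nqn"

  # Tear the subsystem back down.
  $rpc nvmf_subsystem_remove_ns "$nqn" 1          # NSID 5 in the earlier loop, 1 here
  $rpc nvmf_delete_subsystem "$nqn"

The loop at target/rpc.sh@99-107 runs this provisioning and teardown five times without ever connecting a host, which is why no waitforserial calls appear between the create/delete pairs in this part of the trace.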
00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.682 [2024-07-15 11:24:40.127029] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.682 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.941 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.941 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.941 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.941 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.941 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.941 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.941 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.941 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.941 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.941 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:05.941 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.941 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.941 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.941 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:05.941 "tick_rate": 2200000000, 00:09:05.941 "poll_groups": [ 00:09:05.941 { 00:09:05.941 "name": "nvmf_tgt_poll_group_000", 00:09:05.941 "admin_qpairs": 2, 00:09:05.941 "io_qpairs": 196, 00:09:05.941 "current_admin_qpairs": 0, 00:09:05.941 "current_io_qpairs": 0, 00:09:05.941 "pending_bdev_io": 0, 00:09:05.941 "completed_nvme_io": 248, 00:09:05.941 "transports": [ 00:09:05.941 { 00:09:05.941 "trtype": "TCP" 00:09:05.941 } 00:09:05.941 ] 00:09:05.941 }, 00:09:05.941 { 00:09:05.941 "name": "nvmf_tgt_poll_group_001", 00:09:05.941 "admin_qpairs": 2, 00:09:05.941 "io_qpairs": 196, 00:09:05.941 "current_admin_qpairs": 0, 00:09:05.941 "current_io_qpairs": 0, 00:09:05.941 "pending_bdev_io": 0, 00:09:05.942 "completed_nvme_io": 294, 00:09:05.942 "transports": [ 00:09:05.942 { 00:09:05.942 "trtype": "TCP" 00:09:05.942 } 00:09:05.942 ] 00:09:05.942 }, 00:09:05.942 { 
00:09:05.942 "name": "nvmf_tgt_poll_group_002", 00:09:05.942 "admin_qpairs": 1, 00:09:05.942 "io_qpairs": 196, 00:09:05.942 "current_admin_qpairs": 0, 00:09:05.942 "current_io_qpairs": 0, 00:09:05.942 "pending_bdev_io": 0, 00:09:05.942 "completed_nvme_io": 344, 00:09:05.942 "transports": [ 00:09:05.942 { 00:09:05.942 "trtype": "TCP" 00:09:05.942 } 00:09:05.942 ] 00:09:05.942 }, 00:09:05.942 { 00:09:05.942 "name": "nvmf_tgt_poll_group_003", 00:09:05.942 "admin_qpairs": 2, 00:09:05.942 "io_qpairs": 196, 00:09:05.942 "current_admin_qpairs": 0, 00:09:05.942 "current_io_qpairs": 0, 00:09:05.942 "pending_bdev_io": 0, 00:09:05.942 "completed_nvme_io": 248, 00:09:05.942 "transports": [ 00:09:05.942 { 00:09:05.942 "trtype": "TCP" 00:09:05.942 } 00:09:05.942 ] 00:09:05.942 } 00:09:05.942 ] 00:09:05.942 }' 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:05.942 rmmod nvme_tcp 00:09:05.942 rmmod nvme_fabrics 00:09:05.942 rmmod nvme_keyring 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2648523 ']' 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2648523 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 2648523 ']' 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 2648523 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2648523 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2648523' 00:09:05.942 killing process with pid 2648523 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 2648523 00:09:05.942 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 2648523 00:09:06.201 11:24:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:06.201 11:24:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:06.201 11:24:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:06.201 11:24:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:06.201 11:24:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:06.201 11:24:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.201 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:06.201 11:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.738 11:24:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:08.738 00:09:08.738 real 0m34.828s 00:09:08.738 user 1m47.009s 00:09:08.738 sys 0m6.472s 00:09:08.738 11:24:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:08.738 11:24:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.738 ************************************ 00:09:08.738 END TEST nvmf_rpc 00:09:08.738 ************************************ 00:09:08.738 11:24:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:08.738 11:24:42 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:08.738 11:24:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:08.738 11:24:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.739 11:24:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:08.739 ************************************ 00:09:08.739 START TEST nvmf_invalid 00:09:08.739 ************************************ 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:08.739 * Looking for test storage... 
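Before the teardown that closed the nvmf_rpc run above, the test dumped nvmf_get_stats and summed the per-poll-group qpair counters with the small jsum helper visible in the trace (jq piped into awk), expecting both totals to be non-zero; it saw 7 admin qpairs and 784 I/O qpairs across the four poll groups. A sketch of that check against a running target follows; the rpc.py path is illustrative and this is not the literal rpc.sh helper.

  stats=$(./scripts/rpc.py nvmf_get_stats)    # illustrative path to SPDK's rpc.py
  # Sum one numeric field across all poll groups, as jsum does in the trace.
  jsum() { echo "$stats" | jq "$1" | awk '{s+=$1} END {print s}'; }
  admin_qpairs=$(jsum '.poll_groups[].admin_qpairs')
  io_qpairs=$(jsum '.poll_groups[].io_qpairs')
  (( admin_qpairs > 0 )) && (( io_qpairs > 0 )) || echo "unexpected qpair totals" >&2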
00:09:08.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:08.739 11:24:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:14.124 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:14.124 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:14.124 Found net devices under 0000:af:00.0: cvl_0_0 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:14.124 Found net devices under 0000:af:00.1: cvl_0_1 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:14.124 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:14.125 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:14.125 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:14.125 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:14.383 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:14.383 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:14.383 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:14.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:14.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:09:14.383 00:09:14.383 --- 10.0.0.2 ping statistics --- 00:09:14.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.383 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:09:14.383 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:14.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:14.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:09:14.383 00:09:14.383 --- 10.0.0.1 ping statistics --- 00:09:14.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.383 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:09:14.383 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.383 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:14.383 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:14.383 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.383 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:14.383 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:14.383 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.383 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:14.383 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:14.383 11:24:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:14.383 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:14.383 11:24:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:14.383 11:24:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:14.383 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2657144 00:09:14.383 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:14.383 11:24:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2657144 00:09:14.383 11:24:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 2657144 ']' 00:09:14.383 11:24:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.383 11:24:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:14.383 11:24:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.383 11:24:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:14.383 11:24:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:14.383 [2024-07-15 11:24:48.714828] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:09:14.383 [2024-07-15 11:24:48.714882] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.383 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.383 [2024-07-15 11:24:48.804809] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:14.643 [2024-07-15 11:24:48.897442] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.643 [2024-07-15 11:24:48.897484] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.643 [2024-07-15 11:24:48.897494] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.643 [2024-07-15 11:24:48.897503] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.643 [2024-07-15 11:24:48.897510] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.643 [2024-07-15 11:24:48.897560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.643 [2024-07-15 11:24:48.897602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.643 [2024-07-15 11:24:48.897711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:14.643 [2024-07-15 11:24:48.897713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.643 11:24:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:14.643 11:24:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:09:14.643 11:24:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:14.643 11:24:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:14.643 11:24:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:14.643 11:24:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.643 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:14.643 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18977 00:09:14.902 [2024-07-15 11:24:49.277701] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:14.902 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:14.902 { 00:09:14.902 "nqn": "nqn.2016-06.io.spdk:cnode18977", 00:09:14.902 "tgt_name": "foobar", 00:09:14.902 "method": "nvmf_create_subsystem", 00:09:14.902 "req_id": 1 00:09:14.902 } 00:09:14.902 Got JSON-RPC error response 00:09:14.902 response: 00:09:14.902 { 00:09:14.902 "code": -32603, 00:09:14.902 "message": "Unable to find target foobar" 00:09:14.902 }' 00:09:14.902 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:14.902 { 00:09:14.902 "nqn": "nqn.2016-06.io.spdk:cnode18977", 00:09:14.902 "tgt_name": "foobar", 00:09:14.902 "method": "nvmf_create_subsystem", 00:09:14.902 "req_id": 1 00:09:14.902 } 00:09:14.902 Got JSON-RPC error response 00:09:14.902 response: 00:09:14.902 { 00:09:14.902 "code": -32603, 00:09:14.902 "message": "Unable to find target foobar" 
00:09:14.902 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:14.902 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:14.902 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18218 00:09:15.161 [2024-07-15 11:24:49.462483] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18218: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:15.161 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:15.161 { 00:09:15.161 "nqn": "nqn.2016-06.io.spdk:cnode18218", 00:09:15.161 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:15.161 "method": "nvmf_create_subsystem", 00:09:15.161 "req_id": 1 00:09:15.161 } 00:09:15.161 Got JSON-RPC error response 00:09:15.161 response: 00:09:15.161 { 00:09:15.161 "code": -32602, 00:09:15.161 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:15.161 }' 00:09:15.161 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:15.161 { 00:09:15.161 "nqn": "nqn.2016-06.io.spdk:cnode18218", 00:09:15.161 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:15.161 "method": "nvmf_create_subsystem", 00:09:15.161 "req_id": 1 00:09:15.161 } 00:09:15.161 Got JSON-RPC error response 00:09:15.161 response: 00:09:15.161 { 00:09:15.161 "code": -32602, 00:09:15.161 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:15.161 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:15.161 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:15.161 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13789 00:09:15.421 [2024-07-15 11:24:49.731484] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13789: invalid model number 'SPDK_Controller' 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:15.421 { 00:09:15.421 "nqn": "nqn.2016-06.io.spdk:cnode13789", 00:09:15.421 "model_number": "SPDK_Controller\u001f", 00:09:15.421 "method": "nvmf_create_subsystem", 00:09:15.421 "req_id": 1 00:09:15.421 } 00:09:15.421 Got JSON-RPC error response 00:09:15.421 response: 00:09:15.421 { 00:09:15.421 "code": -32602, 00:09:15.421 "message": "Invalid MN SPDK_Controller\u001f" 00:09:15.421 }' 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:15.421 { 00:09:15.421 "nqn": "nqn.2016-06.io.spdk:cnode13789", 00:09:15.421 "model_number": "SPDK_Controller\u001f", 00:09:15.421 "method": "nvmf_create_subsystem", 00:09:15.421 "req_id": 1 00:09:15.421 } 00:09:15.421 Got JSON-RPC error response 00:09:15.421 response: 00:09:15.421 { 00:09:15.421 "code": -32602, 00:09:15.421 "message": "Invalid MN SPDK_Controller\u001f" 00:09:15.421 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' 
'83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.421 
11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.421 
11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.421 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:09:15.680 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:15.680 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:09:15.680 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.680 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.680 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:09:15.680 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:09:15.680 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:09:15.680 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.680 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.680 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:09:15.680 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:15.680 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:09:15.680 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.680 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.680 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:09:15.680 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:15.680 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:09:15.680 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.680 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.680 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ X == \- ]] 00:09:15.680 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'XYgDxHC-v(OfC*cH`.+US' 00:09:15.680 11:24:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'XYgDxHC-v(OfC*cH`.+US' nqn.2016-06.io.spdk:cnode28721 00:09:15.680 [2024-07-15 11:24:50.137136] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28721: invalid serial number 'XYgDxHC-v(OfC*cH`.+US' 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:15.940 { 00:09:15.940 "nqn": "nqn.2016-06.io.spdk:cnode28721", 00:09:15.940 "serial_number": "XYgDxHC-v(OfC*cH`.+US", 00:09:15.940 "method": "nvmf_create_subsystem", 00:09:15.940 "req_id": 1 00:09:15.940 } 00:09:15.940 Got JSON-RPC error response 00:09:15.940 response: 00:09:15.940 { 
00:09:15.940 "code": -32602, 00:09:15.940 "message": "Invalid SN XYgDxHC-v(OfC*cH`.+US" 00:09:15.940 }' 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:15.940 { 00:09:15.940 "nqn": "nqn.2016-06.io.spdk:cnode28721", 00:09:15.940 "serial_number": "XYgDxHC-v(OfC*cH`.+US", 00:09:15.940 "method": "nvmf_create_subsystem", 00:09:15.940 "req_id": 1 00:09:15.940 } 00:09:15.940 Got JSON-RPC error response 00:09:15.940 response: 00:09:15.940 { 00:09:15.940 "code": -32602, 00:09:15.940 "message": "Invalid SN XYgDxHC-v(OfC*cH`.+US" 00:09:15.940 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 
00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 
00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.940 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 
00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.941 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ e == \- ]] 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'eL+@3w29p1SO+@!hfPczK|#[xOq&i|Q-*?Ll"jCf~' 00:09:16.200 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem -d 'eL+@3w29p1SO+@!hfPczK|#[xOq&i|Q-*?Ll"jCf~' nqn.2016-06.io.spdk:cnode17846 00:09:16.459 [2024-07-15 11:24:50.667175] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17846: invalid model number 'eL+@3w29p1SO+@!hfPczK|#[xOq&i|Q-*?Ll"jCf~' 00:09:16.459 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:16.459 { 00:09:16.459 "nqn": "nqn.2016-06.io.spdk:cnode17846", 00:09:16.459 "model_number": "eL+@3w29p1SO+@!hfPczK|#[xOq&i|Q-*?Ll\"jCf~", 00:09:16.459 "method": "nvmf_create_subsystem", 00:09:16.459 "req_id": 1 00:09:16.459 } 00:09:16.459 Got JSON-RPC error response 00:09:16.459 response: 00:09:16.459 { 00:09:16.459 "code": -32602, 00:09:16.459 "message": "Invalid MN eL+@3w29p1SO+@!hfPczK|#[xOq&i|Q-*?Ll\"jCf~" 00:09:16.459 }' 00:09:16.459 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:16.459 { 00:09:16.459 "nqn": "nqn.2016-06.io.spdk:cnode17846", 00:09:16.459 "model_number": "eL+@3w29p1SO+@!hfPczK|#[xOq&i|Q-*?Ll\"jCf~", 00:09:16.459 "method": "nvmf_create_subsystem", 00:09:16.459 "req_id": 1 00:09:16.459 } 00:09:16.459 Got JSON-RPC error response 00:09:16.459 response: 00:09:16.459 { 00:09:16.459 "code": -32602, 00:09:16.459 "message": "Invalid MN eL+@3w29p1SO+@!hfPczK|#[xOq&i|Q-*?Ll\"jCf~" 00:09:16.459 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:16.459 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:16.718 [2024-07-15 11:24:50.932316] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.718 11:24:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:16.977 11:24:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:16.977 11:24:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:16.977 11:24:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:16.977 11:24:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:16.977 11:24:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:17.236 [2024-07-15 11:24:51.458374] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:17.236 11:24:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:17.236 { 00:09:17.236 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:17.236 "listen_address": { 00:09:17.236 "trtype": "tcp", 00:09:17.236 "traddr": "", 00:09:17.236 "trsvcid": "4421" 00:09:17.236 }, 00:09:17.236 "method": "nvmf_subsystem_remove_listener", 00:09:17.236 "req_id": 1 00:09:17.236 } 00:09:17.236 Got JSON-RPC error response 00:09:17.236 response: 00:09:17.236 { 00:09:17.236 "code": -32602, 00:09:17.236 "message": "Invalid parameters" 00:09:17.236 }' 00:09:17.236 11:24:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:17.236 { 00:09:17.236 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:17.236 "listen_address": { 00:09:17.236 "trtype": "tcp", 00:09:17.236 "traddr": "", 00:09:17.236 "trsvcid": "4421" 00:09:17.236 }, 00:09:17.236 "method": "nvmf_subsystem_remove_listener", 00:09:17.236 "req_id": 1 00:09:17.236 } 00:09:17.236 Got JSON-RPC error response 00:09:17.236 response: 00:09:17.236 { 
00:09:17.236 "code": -32602, 00:09:17.236 "message": "Invalid parameters" 00:09:17.236 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:17.236 11:24:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7750 -i 0 00:09:17.495 [2024-07-15 11:24:51.723338] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7750: invalid cntlid range [0-65519] 00:09:17.495 11:24:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:17.495 { 00:09:17.495 "nqn": "nqn.2016-06.io.spdk:cnode7750", 00:09:17.495 "min_cntlid": 0, 00:09:17.495 "method": "nvmf_create_subsystem", 00:09:17.495 "req_id": 1 00:09:17.495 } 00:09:17.495 Got JSON-RPC error response 00:09:17.495 response: 00:09:17.495 { 00:09:17.495 "code": -32602, 00:09:17.495 "message": "Invalid cntlid range [0-65519]" 00:09:17.495 }' 00:09:17.495 11:24:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:17.495 { 00:09:17.495 "nqn": "nqn.2016-06.io.spdk:cnode7750", 00:09:17.495 "min_cntlid": 0, 00:09:17.495 "method": "nvmf_create_subsystem", 00:09:17.495 "req_id": 1 00:09:17.495 } 00:09:17.495 Got JSON-RPC error response 00:09:17.495 response: 00:09:17.495 { 00:09:17.495 "code": -32602, 00:09:17.495 "message": "Invalid cntlid range [0-65519]" 00:09:17.495 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:17.495 11:24:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9481 -i 65520 00:09:17.754 [2024-07-15 11:24:51.984365] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9481: invalid cntlid range [65520-65519] 00:09:17.754 11:24:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:17.754 { 00:09:17.754 "nqn": "nqn.2016-06.io.spdk:cnode9481", 00:09:17.754 "min_cntlid": 65520, 00:09:17.754 "method": "nvmf_create_subsystem", 00:09:17.754 "req_id": 1 00:09:17.754 } 00:09:17.754 Got JSON-RPC error response 00:09:17.754 response: 00:09:17.754 { 00:09:17.754 "code": -32602, 00:09:17.754 "message": "Invalid cntlid range [65520-65519]" 00:09:17.754 }' 00:09:17.754 11:24:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:17.754 { 00:09:17.754 "nqn": "nqn.2016-06.io.spdk:cnode9481", 00:09:17.754 "min_cntlid": 65520, 00:09:17.754 "method": "nvmf_create_subsystem", 00:09:17.754 "req_id": 1 00:09:17.754 } 00:09:17.754 Got JSON-RPC error response 00:09:17.754 response: 00:09:17.754 { 00:09:17.754 "code": -32602, 00:09:17.754 "message": "Invalid cntlid range [65520-65519]" 00:09:17.754 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:17.754 11:24:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30200 -I 0 00:09:18.013 [2024-07-15 11:24:52.245351] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30200: invalid cntlid range [1-0] 00:09:18.013 11:24:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:18.013 { 00:09:18.013 "nqn": "nqn.2016-06.io.spdk:cnode30200", 00:09:18.013 "max_cntlid": 0, 00:09:18.013 "method": "nvmf_create_subsystem", 00:09:18.013 "req_id": 1 00:09:18.013 } 00:09:18.013 Got JSON-RPC error response 00:09:18.013 response: 00:09:18.013 { 00:09:18.013 "code": -32602, 00:09:18.013 
"message": "Invalid cntlid range [1-0]" 00:09:18.013 }' 00:09:18.013 11:24:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:18.013 { 00:09:18.013 "nqn": "nqn.2016-06.io.spdk:cnode30200", 00:09:18.013 "max_cntlid": 0, 00:09:18.013 "method": "nvmf_create_subsystem", 00:09:18.013 "req_id": 1 00:09:18.013 } 00:09:18.013 Got JSON-RPC error response 00:09:18.013 response: 00:09:18.013 { 00:09:18.013 "code": -32602, 00:09:18.013 "message": "Invalid cntlid range [1-0]" 00:09:18.013 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:18.013 11:24:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32507 -I 65520 00:09:18.272 [2024-07-15 11:24:52.510414] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32507: invalid cntlid range [1-65520] 00:09:18.272 11:24:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:18.272 { 00:09:18.272 "nqn": "nqn.2016-06.io.spdk:cnode32507", 00:09:18.272 "max_cntlid": 65520, 00:09:18.272 "method": "nvmf_create_subsystem", 00:09:18.272 "req_id": 1 00:09:18.272 } 00:09:18.272 Got JSON-RPC error response 00:09:18.272 response: 00:09:18.272 { 00:09:18.272 "code": -32602, 00:09:18.272 "message": "Invalid cntlid range [1-65520]" 00:09:18.272 }' 00:09:18.272 11:24:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:09:18.272 { 00:09:18.272 "nqn": "nqn.2016-06.io.spdk:cnode32507", 00:09:18.272 "max_cntlid": 65520, 00:09:18.272 "method": "nvmf_create_subsystem", 00:09:18.272 "req_id": 1 00:09:18.272 } 00:09:18.272 Got JSON-RPC error response 00:09:18.272 response: 00:09:18.272 { 00:09:18.272 "code": -32602, 00:09:18.272 "message": "Invalid cntlid range [1-65520]" 00:09:18.272 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:18.272 11:24:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30845 -i 6 -I 5 00:09:18.534 [2024-07-15 11:24:52.775484] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30845: invalid cntlid range [6-5] 00:09:18.534 11:24:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:18.534 { 00:09:18.534 "nqn": "nqn.2016-06.io.spdk:cnode30845", 00:09:18.534 "min_cntlid": 6, 00:09:18.534 "max_cntlid": 5, 00:09:18.534 "method": "nvmf_create_subsystem", 00:09:18.534 "req_id": 1 00:09:18.534 } 00:09:18.534 Got JSON-RPC error response 00:09:18.534 response: 00:09:18.534 { 00:09:18.534 "code": -32602, 00:09:18.534 "message": "Invalid cntlid range [6-5]" 00:09:18.534 }' 00:09:18.534 11:24:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:18.534 { 00:09:18.534 "nqn": "nqn.2016-06.io.spdk:cnode30845", 00:09:18.534 "min_cntlid": 6, 00:09:18.534 "max_cntlid": 5, 00:09:18.534 "method": "nvmf_create_subsystem", 00:09:18.534 "req_id": 1 00:09:18.534 } 00:09:18.534 Got JSON-RPC error response 00:09:18.534 response: 00:09:18.534 { 00:09:18.534 "code": -32602, 00:09:18.534 "message": "Invalid cntlid range [6-5]" 00:09:18.534 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:18.534 11:24:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:18.534 11:24:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:18.534 { 
00:09:18.534 "name": "foobar", 00:09:18.534 "method": "nvmf_delete_target", 00:09:18.534 "req_id": 1 00:09:18.534 } 00:09:18.534 Got JSON-RPC error response 00:09:18.534 response: 00:09:18.534 { 00:09:18.534 "code": -32602, 00:09:18.534 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:18.534 }' 00:09:18.534 11:24:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:18.534 { 00:09:18.534 "name": "foobar", 00:09:18.534 "method": "nvmf_delete_target", 00:09:18.534 "req_id": 1 00:09:18.534 } 00:09:18.534 Got JSON-RPC error response 00:09:18.534 response: 00:09:18.534 { 00:09:18.534 "code": -32602, 00:09:18.534 "message": "The specified target doesn't exist, cannot delete it." 00:09:18.534 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:18.534 11:24:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:18.534 11:24:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:18.534 11:24:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:18.534 11:24:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:18.534 11:24:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:18.534 11:24:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:18.534 11:24:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:18.534 11:24:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:18.534 rmmod nvme_tcp 00:09:18.534 rmmod nvme_fabrics 00:09:18.534 rmmod nvme_keyring 00:09:18.793 11:24:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:18.793 11:24:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:18.793 11:24:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:18.793 11:24:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2657144 ']' 00:09:18.793 11:24:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2657144 00:09:18.793 11:24:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 2657144 ']' 00:09:18.793 11:24:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 2657144 00:09:18.793 11:24:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:09:18.793 11:24:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:18.793 11:24:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2657144 00:09:18.793 11:24:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:18.793 11:24:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:18.793 11:24:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2657144' 00:09:18.793 killing process with pid 2657144 00:09:18.793 11:24:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 2657144 00:09:18.793 11:24:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 2657144 00:09:19.052 11:24:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:19.052 11:24:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:19.052 11:24:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:19.052 11:24:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:09:19.052 11:24:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:19.052 11:24:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.052 11:24:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:19.052 11:24:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.958 11:24:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:20.958 00:09:20.958 real 0m12.565s 00:09:20.958 user 0m22.050s 00:09:20.958 sys 0m5.418s 00:09:20.958 11:24:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.958 11:24:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:20.958 ************************************ 00:09:20.958 END TEST nvmf_invalid 00:09:20.958 ************************************ 00:09:20.958 11:24:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:20.958 11:24:55 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:20.958 11:24:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:20.958 11:24:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.958 11:24:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:20.958 ************************************ 00:09:20.958 START TEST nvmf_abort 00:09:20.958 ************************************ 00:09:20.958 11:24:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:21.216 * Looking for test storage... 00:09:21.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:21.216 11:24:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:27.782 11:25:01 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:27.782 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:27.782 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:27.782 Found net devices under 0000:af:00.0: cvl_0_0 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:27.782 Found net devices under 0000:af:00.1: cvl_0_1 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:27.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:27.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:09:27.782 00:09:27.782 --- 10.0.0.2 ping statistics --- 00:09:27.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.782 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:27.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:27.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:09:27.782 00:09:27.782 --- 10.0.0.1 ping statistics --- 00:09:27.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.782 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2661682 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2661682 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 2661682 ']' 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:27.782 11:25:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:27.782 [2024-07-15 11:25:01.459240] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
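The nvmftestinit/nvmf_tcp_init trace above reduces to a short iproute2/iptables sequence. As a minimal sketch of that setup, using the interface names (cvl_0_0, cvl_0_1) and 10.0.0.x addresses this particular run reported rather than anything generic:

    # move the target-side port into its own network namespace and address both ends
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port and sanity-check reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Splitting the two E810 ports this way lets one host act as both NVMe/TCP target (10.0.0.2, inside cvl_0_0_ns_spdk) and initiator (10.0.0.1) while the traffic still crosses the physical ports, which is the point of NET_TYPE=phy.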
00:09:27.782 [2024-07-15 11:25:01.459306] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.782 EAL: No free 2048 kB hugepages reported on node 1 00:09:27.782 [2024-07-15 11:25:01.546720] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:27.782 [2024-07-15 11:25:01.654781] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.782 [2024-07-15 11:25:01.654827] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.782 [2024-07-15 11:25:01.654841] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.782 [2024-07-15 11:25:01.654852] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.782 [2024-07-15 11:25:01.654867] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.782 [2024-07-15 11:25:01.654935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.782 [2024-07-15 11:25:01.654966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:27.782 [2024-07-15 11:25:01.654968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.041 11:25:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:28.041 11:25:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:09:28.041 11:25:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:28.041 11:25:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:28.041 11:25:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:28.041 11:25:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.041 11:25:02 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:28.041 11:25:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.041 11:25:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:28.041 [2024-07-15 11:25:02.457986] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.041 11:25:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.041 11:25:02 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:28.041 11:25:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.041 11:25:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:28.301 Malloc0 00:09:28.301 11:25:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.301 11:25:02 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:28.301 11:25:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.301 11:25:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:28.301 Delay0 00:09:28.301 11:25:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.301 11:25:02 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:09:28.301 11:25:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.301 11:25:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:28.301 11:25:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.301 11:25:02 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:28.301 11:25:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.301 11:25:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:28.301 11:25:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.301 11:25:02 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:28.301 11:25:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.301 11:25:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:28.301 [2024-07-15 11:25:02.553704] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.301 11:25:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.301 11:25:02 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:28.301 11:25:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.301 11:25:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:28.301 11:25:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.301 11:25:02 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:28.301 EAL: No free 2048 kB hugepages reported on node 1 00:09:28.301 [2024-07-15 11:25:02.725427] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:30.833 Initializing NVMe Controllers 00:09:30.833 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:30.833 controller IO queue size 128 less than required 00:09:30.833 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:30.833 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:30.833 Initialization complete. Launching workers. 
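Condensed from the trace, the abort target above is stood up with a handful of RPCs before the abort example is pointed at it. rpc_cmd is the harness wrapper for these calls; shown here as the equivalent scripts/rpc.py invocations against the nvmf_tgt started earlier:

    # target side: TCP transport, a 64 MB malloc bdev wrapped in a 1 s delay bdev, one subsystem
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # initiator side: one core, queue depth 128, 1 second of I/O with aborts issued against it
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The 1,000,000 us latencies configured on Delay0 are what keep I/O outstanding long enough for the abort commands to have something to cancel, which is why nearly all of the 29,279 submitted aborts succeed in the statistics that follow.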
00:09:30.833 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29218 00:09:30.833 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29279, failed to submit 62 00:09:30.833 success 29222, unsuccess 57, failed 0 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:30.833 rmmod nvme_tcp 00:09:30.833 rmmod nvme_fabrics 00:09:30.833 rmmod nvme_keyring 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2661682 ']' 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2661682 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 2661682 ']' 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 2661682 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2661682 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2661682' 00:09:30.833 killing process with pid 2661682 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 2661682 00:09:30.833 11:25:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 2661682 00:09:30.833 11:25:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:30.833 11:25:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:30.833 11:25:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:30.833 11:25:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:30.833 11:25:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:30.833 11:25:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.833 11:25:05 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:30.833 11:25:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.371 11:25:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:33.371 00:09:33.371 real 0m11.883s 00:09:33.371 user 0m14.142s 00:09:33.371 sys 0m5.443s 00:09:33.371 11:25:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:33.371 11:25:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:33.371 ************************************ 00:09:33.371 END TEST nvmf_abort 00:09:33.371 ************************************ 00:09:33.371 11:25:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:33.371 11:25:07 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:33.371 11:25:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:33.371 11:25:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.371 11:25:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:33.371 ************************************ 00:09:33.371 START TEST nvmf_ns_hotplug_stress 00:09:33.371 ************************************ 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:33.371 * Looking for test storage... 00:09:33.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.371 11:25:07 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:33.371 11:25:07 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:33.371 11:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:38.647 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:38.647 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.647 11:25:13 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:38.647 Found net devices under 0000:af:00.0: cvl_0_0 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.647 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:38.647 Found net devices under 0000:af:00.1: cvl_0_1 00:09:38.648 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.648 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:38.648 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:38.648 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:38.648 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:38.648 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:38.648 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:38.648 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:38.648 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:38.648 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:38.648 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:38.648 11:25:13 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:38.648 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:38.648 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:38.648 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:38.648 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:38.648 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:38.648 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:38.648 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:38.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:38.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:09:38.908 00:09:38.908 --- 10.0.0.2 ping statistics --- 00:09:38.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.908 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:38.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:38.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:09:38.908 00:09:38.908 --- 10.0.0.1 ping statistics --- 00:09:38.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.908 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2666032 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2666032 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 2666032 ']' 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:38.908 11:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:39.167 [2024-07-15 11:25:13.425405] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
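As in the abort run, nvmfappstart here comes down to launching the target inside the target namespace and waiting for its RPC socket. Stripped of harness plumbing and with the workspace path shortened, roughly:

    # -m 0xE: cores 1-3 (matching the three reactors reported below); -e 0xFFFF: all tracepoint groups; -i 0: shm id
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # waitforlisten is the autotest helper that polls /var/tmp/spdk.sock until the target answers RPCs
    waitforlisten "$nvmfpid"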
00:09:39.167 [2024-07-15 11:25:13.425469] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.167 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.167 [2024-07-15 11:25:13.513317] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:39.167 [2024-07-15 11:25:13.618839] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:39.167 [2024-07-15 11:25:13.618890] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:39.167 [2024-07-15 11:25:13.618903] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:39.167 [2024-07-15 11:25:13.618914] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:39.167 [2024-07-15 11:25:13.618925] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:39.167 [2024-07-15 11:25:13.619052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:39.167 [2024-07-15 11:25:13.619164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:39.167 [2024-07-15 11:25:13.619166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.102 11:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:40.102 11:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:09:40.102 11:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:40.103 11:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:40.103 11:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:40.103 11:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:40.103 11:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:40.103 11:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:40.361 [2024-07-15 11:25:14.637432] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:40.361 11:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:40.619 11:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:40.878 [2024-07-15 11:25:15.173210] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:40.878 11:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:41.137 11:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:09:41.395 Malloc0 00:09:41.396 11:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:41.654 Delay0 00:09:41.654 11:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.912 11:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:42.171 NULL1 00:09:42.171 11:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:42.430 11:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2666653 00:09:42.430 11:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:42.430 11:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653 00:09:42.430 11:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.430 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.807 Read completed with error (sct=0, sc=11) 00:09:43.807 11:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.807 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.807 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.807 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.807 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.807 11:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:43.807 11:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:44.065 true 00:09:44.065 11:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653 00:09:44.065 11:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.000 11:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.259 11:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:45.259 11:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:45.518 true 00:09:45.518 11:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill 
-0 2666653 00:09:45.518 11:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.776 11:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.035 11:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:46.035 11:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:46.294 true 00:09:46.294 11:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653 00:09:46.294 11:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.553 11:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.811 11:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:46.812 11:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:47.070 true 00:09:47.070 11:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653 00:09:47.070 11:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.006 11:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.265 11:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:48.265 11:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:48.524 true 00:09:48.524 11:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653 00:09:48.524 11:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.783 11:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.040 11:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:49.040 11:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:49.297 true 00:09:49.297 11:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653 00:09:49.297 11:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.555 11:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.813 11:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:49.813 11:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:50.072 true 00:09:50.072 11:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653 00:09:50.072 11:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.453 11:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.453 11:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:51.453 11:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:51.757 true 00:09:51.757 11:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653 00:09:51.757 11:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.047 11:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.312 11:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:52.312 11:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:52.570 true 00:09:52.570 11:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653 00:09:52.570 11:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.507 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.507 11:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.507 11:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:53.507 11:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:53.765 true 00:09:53.765 11:25:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653 00:09:53.765 11:25:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.024 11:25:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.283 11:25:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:54.283 11:25:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:54.542 true 00:09:54.542 11:25:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653 00:09:54.542 11:25:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.479 11:25:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.738 11:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:55.738 11:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:55.996 true 00:09:55.996 11:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653 00:09:55.996 11:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.255 11:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.514 11:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:56.514 11:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:56.773 true 00:09:56.773 11:25:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653 00:09:56.773 11:25:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.032 11:25:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.291 11:25:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:57.291 11:25:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:57.553 true 00:09:57.553 11:25:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653 00:09:57.553 11:25:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.935 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:09:58.935 11:25:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.935 11:25:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:58.935 11:25:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:59.201 true 00:09:59.201 11:25:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653 00:09:59.201 11:25:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.459 11:25:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.717 11:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:59.717 11:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:59.975 true 00:09:59.975 11:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653 00:09:59.975 11:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.234 11:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.492 11:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:00.492 11:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:00.750 true 00:10:00.750 11:25:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653 00:10:00.750 11:25:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.684 11:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.942 11:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:01.942 11:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:02.200 true 00:10:02.200 11:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653 00:10:02.200 11:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.458 11:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.716 11:25:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:02.716 11:25:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:02.972 true 00:10:02.972 11:25:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653 00:10:02.972 11:25:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.229 11:25:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.486 11:25:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:03.486 11:25:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:03.745 true 00:10:03.745 11:25:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653 00:10:03.745 11:25:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.117 11:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.117 11:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:05.117 11:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:05.375 true 00:10:05.375 11:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653 00:10:05.375 11:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.633 11:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.891 11:25:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:05.891 11:25:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:06.149 true 00:10:06.149 11:25:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653 00:10:06.149 11:25:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.408 11:25:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.666 11:25:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:06.666 11:25:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:06.924 true 00:10:06.924 11:25:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653 00:10:06.924 11:25:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.859 11:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.118 11:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:08.118 11:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:08.375 true 00:10:08.375 11:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653 00:10:08.375 11:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.634 11:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.905 11:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:08.905 11:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:09.170 true 00:10:09.170 11:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653 00:10:09.170 11:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.104 11:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.620 11:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:10.620 11:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:10.620 true 00:10:10.879 11:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653 00:10:10.879 11:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.138 11:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:11.397 11:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:10:11.397 11:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:10:11.397 true
00:10:11.656 11:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653
00:10:11.656 11:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:12.593 11:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:12.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:12.593 Initializing NVMe Controllers
00:10:12.593 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:12.593 Controller IO queue size 128, less than required.
00:10:12.593 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:12.593 Controller IO queue size 128, less than required.
00:10:12.593 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:12.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:12.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:12.593 Initialization complete. Launching workers.
00:10:12.593 ========================================================
00:10:12.593 Latency(us)
00:10:12.593 Device Information : IOPS MiB/s Average min max
00:10:12.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 463.27 0.23 126285.71 4225.01 1023532.95
00:10:12.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3886.74 1.90 32934.18 6590.81 458351.61
00:10:12.593 ========================================================
00:10:12.593 Total : 4350.00 2.12 42875.91 4225.01 1023532.95
00:10:12.852 11:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:10:12.852 11:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:10:12.852 true
00:10:13.111 11:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2666653
00:10:13.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2666653) - No such process
00:10:13.111 11:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2666653
00:10:13.111 11:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:13.371 11:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:13.371 11:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:10:13.371 11:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:10:13.371 11:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:10:13.371 11:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:13.371 11:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:10:13.630 null0
00:10:13.630 11:25:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:13.630 11:25:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:13.630 11:25:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:10:13.889 null1
00:10:13.889 11:25:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:13.889 11:25:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:13.889 11:25:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:10:14.149 null2
00:10:14.408 11:25:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:14.408 11:25:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:14.408 11:25:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:10:14.408 null3
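The loop traced in the preceding entries (the ns_hotplug_stress.sh@44-@50 markers) is the single-namespace hotplug phase: while the background I/O process (PID 2666653 in this run) is still alive, the script detaches namespace 1 from nqn.2016-06.io.spdk:cnode1, re-attaches the Delay0 bdev, bumps null_size by one and resizes the NULL1 bdev to match. A minimal bash sketch of that phase, reconstructed from the xtrace output; the rpc_py, nqn and perf_pid names are assumptions for illustration, not the literal SPDK script:

  # Reconstructed from the @44-@50 trace lines above; a sketch, not the verbatim test script.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  null_size=1000
  while kill -0 "$perf_pid"; do                        # @44: keep looping while the I/O generator runs
      "$rpc_py" nvmf_subsystem_remove_ns "$nqn" 1      # @45: hot-remove namespace 1
      "$rpc_py" nvmf_subsystem_add_ns "$nqn" Delay0    # @46: hot-add the Delay0 bdev back
      null_size=$((null_size + 1))                     # @49: 1001, 1002, ... 1028 in this run
      "$rpc_py" bdev_null_resize NULL1 "$null_size"    # @50: resize NULL1 while I/O is in flight
  done

The "No such process" entry above marks the end of that phase: the I/O generator has exited, so the script removes namespaces 1 and 2 and starts creating the eight null bdevs used by the parallel phase that continues below.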
00:10:14.667 11:25:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:14.667 11:25:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:14.667 11:25:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:14.667 null4 00:10:14.926 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:14.926 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:14.926 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:14.926 null5 00:10:15.184 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:15.184 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:15.184 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:15.184 null6 00:10:15.443 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:15.443 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:15.443 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:15.443 null7 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
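From here the log interleaves eight add_remove workers (ns_hotplug_stress.sh@14-@18) launched by the loop at @58-@64, one per null bdev, with the wait on all eight worker PIDs appearing a few entries below (@66). A rough bash sketch of that phase, pieced together from the trace; rpc_py and nqn are again assumed helper variables rather than the script's exact wording:

  # Sketch of the parallel add/remove phase traced at @14-@18 and @58-@66; not the verbatim script.
  add_remove() {                                     # @14: one worker per (nsid, bdev) pair
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do                 # @16: ten add/remove cycles per worker
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
          "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
      done
  }
  nthreads=8                                         # @58
  pids=()                                            # @58
  for ((i = 0; i < nthreads; i++)); do               # @59
      "$rpc_py" bdev_null_create "null$i" 100 4096   # @60: 100 blocks of 4096 bytes each
      add_remove $((i + 1)) "null$i" &               # @63: run the worker in the background
      pids+=($!)                                     # @64
  done
  wait "${pids[@]}"                                  # @66: wait 2672748 2672749 ... in this run

Because all eight workers hit the same subsystem concurrently, the interleaved @16/@17/@18 entries that follow are expected to arrive out of order.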
00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2672748 2672749 2672751 2672753 2672756 2672758 2672760 2672762 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.703 11:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:15.703 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:15.963 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:15.963 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:15.963 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.963 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:15.963 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:15.963 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:15.963 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.963 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.963 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:15.963 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.963 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.963 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:15.963 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:16.223 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.223 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.223 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:16.223 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.223 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.223 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:16.223 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.223 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.223 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:16.223 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.223 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.223 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:16.223 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.223 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.223 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:16.223 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.223 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:16.223 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:16.483 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.483 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.483 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:16.483 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:16.483 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:16.483 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:16.483 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:16.483 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.483 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.483 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.483 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:16.483 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.483 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:16.483 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.483 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.483 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:16.483 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:16.483 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.483 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.483 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:16.483 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.483 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.483 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:16.741 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.742 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.742 11:25:50 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:16.742 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.742 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.742 11:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:16.742 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.742 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:16.742 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:16.742 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:16.742 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.742 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.742 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:16.742 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:16.742 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:16.742 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:17.001 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.001 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.001 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:17.001 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.001 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.001 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:17.001 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.001 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.001 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:17.001 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.001 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.001 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:17.001 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.001 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.001 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:17.001 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.001 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.001 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:17.001 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:17.259 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.259 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.259 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:17.259 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:17.259 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:17.259 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:17.260 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:17.260 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:17.260 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:17.519 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:10:17.519 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.519 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:17.519 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.519 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.519 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.519 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:17.519 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.519 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.519 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:17.519 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:17.519 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.519 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.519 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:17.519 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.519 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.519 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:17.519 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.519 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.519 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:17.519 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.519 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.519 11:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:17.778 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:17.778 11:25:52 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.778 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.778 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:17.778 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:17.778 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:17.778 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:17.778 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:17.778 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.778 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.778 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:17.778 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:18.038 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.038 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.038 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:18.038 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.038 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.038 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.038 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:18.038 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.038 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.038 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:18.038 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.038 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.038 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:18.038 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.038 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.038 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:18.038 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:18.038 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.038 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.038 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:18.297 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:18.297 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.297 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.297 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:18.297 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:18.297 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:18.297 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:18.297 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:18.297 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.297 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.297 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:18.297 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:18.557 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.557 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.557 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.557 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:18.557 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.557 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.557 11:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:18.557 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.557 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.557 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:18.557 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.557 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.557 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:18.557 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:18.557 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.557 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.557 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:18.817 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.817 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.817 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:18.817 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:18.817 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.817 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.817 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:18.817 11:25:53 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:18.817 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:18.817 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:18.817 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:18.817 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.817 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.817 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:18.817 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.817 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.817 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:18.817 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.077 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:19.077 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:19.077 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.077 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.077 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:19.077 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.077 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.077 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:19.077 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.077 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.077 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:19.077 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.077 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.077 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:19.077 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:19.335 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.335 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.335 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:19.335 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.335 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.335 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:19.335 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.335 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.335 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:19.335 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:19.335 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:19.335 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:19.335 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.335 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.335 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.335 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:19.594 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:19.594 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:19.594 11:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:19.594 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.594 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.594 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:19.594 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.594 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.594 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:19.594 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:19.594 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.594 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.594 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:19.594 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.594 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.594 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:19.853 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.853 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.853 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:19.853 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.853 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.853 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:19.853 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.853 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.853 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:19.853 
11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:19.853 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:19.853 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.853 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:19.853 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.853 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.853 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:20.111 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:20.111 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:20.111 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:20.111 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.111 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.111 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:20.111 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:20.111 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.111 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.111 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:20.111 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.111 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.111 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:20.111 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.111 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.111 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:20.367 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.367 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.367 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:20.367 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.367 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.367 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:20.367 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.367 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.367 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:20.367 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.367 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.367 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.367 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:20.367 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:20.367 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:20.625 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:20.625 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:20.625 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:20.625 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.625 11:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.625 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
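The trace above is the namespace hotplug loop from ns_hotplug_stress.sh: script line 17 attaches the null bdevs null0..null7 to nqn.2016-06.io.spdk:cnode1 as namespaces 1..8 over rpc.py, script line 18 detaches them again, and line 16 bounds the whole thing to ten passes. Reduced to just the commands that appear in the trace (a minimal sketch; the real script interleaves the add and remove calls in a varying order rather than running them back to back):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    i=0
    while (( i < 10 )); do
        for n in $(seq 1 8); do
            # nsid n is backed by bdev null(n-1), matching the trace above
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
        done
        for n in $(seq 1 8); do
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
        done
        (( ++i ))
    done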
00:10:20.625 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.625 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.625 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.625 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.625 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:20.930 rmmod nvme_tcp 00:10:20.930 rmmod nvme_fabrics 00:10:20.930 rmmod nvme_keyring 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2666032 ']' 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2666032 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 2666032 ']' 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 2666032 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2666032 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2666032' 00:10:20.930 killing process with pid 2666032 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@967 -- # kill 2666032 00:10:20.930 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 2666032 00:10:21.272 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:21.272 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:21.272 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:21.272 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:21.272 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:21.272 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.272 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:21.272 11:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.176 11:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:23.176 00:10:23.176 real 0m50.266s 00:10:23.176 user 3m32.646s 00:10:23.176 sys 0m16.212s 00:10:23.176 11:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:23.176 11:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.176 ************************************ 00:10:23.176 END TEST nvmf_ns_hotplug_stress 00:10:23.176 ************************************ 00:10:23.434 11:25:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:23.434 11:25:57 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:23.434 11:25:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:23.434 11:25:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:23.434 11:25:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:23.434 ************************************ 00:10:23.434 START TEST nvmf_connect_stress 00:10:23.434 ************************************ 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:23.434 * Looking for test storage... 
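Before the connect_stress run gets going, note the teardown that closed out the hotplug test above (nvmftestfini): the host-side NVMe modules are unloaded, the nvmf_tgt process is killed and reaped (its comm shows up as reactor_1 in the killprocess check), and the test interface is flushed. Condensed to the commands visible in the trace; the _remove_spdk_ns helper runs with xtrace suppressed, so only its ip flush appears:

    modprobe -v -r nvme-tcp        # rmmod nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 2666032 && wait 2666032   # the nvmf_tgt pid from this run
    ip -4 addr flush cvl_0_1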
00:10:23.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:23.434 11:25:57 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.435 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:23.435 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:23.435 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:23.435 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.435 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.435 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.435 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:23.435 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:23.435 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:23.435 11:25:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:23.435 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:23.435 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:23.435 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:23.435 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:23.435 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:23.435 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.435 11:25:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:23.435 11:25:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.435 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:23.435 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:23.435 11:25:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:23.435 11:25:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.034 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:30.034 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:30.035 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:30.035 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:30.035 Found net devices under 0000:af:00.0: cvl_0_0 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:30.035 11:26:03 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:30.035 Found net devices under 0000:af:00.1: cvl_0_1 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:30.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:30.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:10:30.035 00:10:30.035 --- 10.0.0.2 ping statistics --- 00:10:30.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.035 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:10:30.035 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:30.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:30.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:10:30.035 00:10:30.035 --- 10.0.0.1 ping statistics --- 00:10:30.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.035 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:10:30.036 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:30.036 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:30.036 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:30.036 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:30.036 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:30.036 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:30.036 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:30.036 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:30.036 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:30.036 11:26:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:30.036 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:30.036 11:26:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:30.036 11:26:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.036 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2677432 00:10:30.036 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2677432 00:10:30.036 11:26:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:30.036 11:26:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 2677432 ']' 00:10:30.036 11:26:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.036 11:26:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:30.036 11:26:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.036 11:26:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:30.036 11:26:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.036 [2024-07-15 11:26:03.885949] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
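The connect_stress test starts from the same nvmftestinit path: nvmf/common.sh detects the two ice-bound e810 ports (cvl_0_0 and cvl_0_1), moves the target port into a private network namespace, assigns 10.0.0.1/10.0.0.2, verifies connectivity with one ping in each direction, loads nvme-tcp on the host, and only then launches nvmf_tgt inside the namespace. Condensed to the commands visible in the trace (interface names are whatever this machine's NICs enumerate as; the iptables rule simply keeps the NVMe/TCP port 4420 reachable on the initiator side):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE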
00:10:30.036 [2024-07-15 11:26:03.885989] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.036 EAL: No free 2048 kB hugepages reported on node 1 00:10:30.036 [2024-07-15 11:26:03.962684] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:30.036 [2024-07-15 11:26:04.066512] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.036 [2024-07-15 11:26:04.066562] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.036 [2024-07-15 11:26:04.066576] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.036 [2024-07-15 11:26:04.066587] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.036 [2024-07-15 11:26:04.066597] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:30.036 [2024-07-15 11:26:04.066726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.036 [2024-07-15 11:26:04.066764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:30.036 [2024-07-15 11:26:04.066766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.604 [2024-07-15 11:26:04.806296] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.604 [2024-07-15 11:26:04.842530] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.604 NULL1 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2677708 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:30.604 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:30.605 EAL: No free 2048 kB hugepages reported on node 1 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:30.605 11:26:04 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.605 11:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.864 11:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.864 11:26:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:30.864 11:26:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:30.864 11:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.864 11:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:31.496 11:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.496 11:26:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:31.496 11:26:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:31.496 11:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.496 11:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:31.496 11:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.496 11:26:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 2677708 00:10:31.496 11:26:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:31.496 11:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.496 11:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:32.063 11:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.063 11:26:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:32.063 11:26:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:32.063 11:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.063 11:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:32.321 11:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.321 11:26:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:32.321 11:26:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:32.321 11:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.321 11:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:32.579 11:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.579 11:26:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:32.579 11:26:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:32.579 11:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.579 11:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:32.836 11:26:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.836 11:26:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:32.836 11:26:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:32.836 11:26:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.836 11:26:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:33.095 11:26:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.095 11:26:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:33.095 11:26:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:33.095 11:26:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.095 11:26:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:33.660 11:26:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.660 11:26:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:33.660 11:26:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:33.660 11:26:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.660 11:26:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:33.917 11:26:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.917 11:26:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:33.917 11:26:08 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:33.917 11:26:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.917 11:26:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:34.174 11:26:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.174 11:26:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:34.174 11:26:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:34.174 11:26:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.174 11:26:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:34.433 11:26:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.433 11:26:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:34.433 11:26:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:34.433 11:26:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.433 11:26:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:35.000 11:26:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.000 11:26:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:35.000 11:26:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:35.000 11:26:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.000 11:26:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:35.258 11:26:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.258 11:26:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:35.258 11:26:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:35.258 11:26:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.258 11:26:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:35.517 11:26:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.517 11:26:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:35.517 11:26:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:35.517 11:26:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.517 11:26:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:35.776 11:26:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.776 11:26:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:35.776 11:26:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:35.776 11:26:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.776 11:26:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:36.034 11:26:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.034 11:26:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:36.034 11:26:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:10:36.034 11:26:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.034 11:26:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:36.602 11:26:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.602 11:26:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:36.602 11:26:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:36.602 11:26:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.602 11:26:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:36.860 11:26:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.860 11:26:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:36.860 11:26:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:36.860 11:26:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.860 11:26:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:37.117 11:26:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.117 11:26:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:37.117 11:26:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:37.117 11:26:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.117 11:26:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:37.374 11:26:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.374 11:26:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:37.374 11:26:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:37.374 11:26:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.374 11:26:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:37.939 11:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.939 11:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:37.939 11:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:37.939 11:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.939 11:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:38.196 11:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.196 11:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:38.196 11:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:38.196 11:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.196 11:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:38.454 11:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.454 11:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:38.454 11:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:38.454 11:26:12 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.454 11:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:38.712 11:26:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.712 11:26:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:38.712 11:26:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:38.712 11:26:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.712 11:26:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:38.968 11:26:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.968 11:26:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:38.968 11:26:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:38.968 11:26:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.968 11:26:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:39.534 11:26:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.534 11:26:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:39.534 11:26:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:39.534 11:26:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.534 11:26:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:39.791 11:26:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.791 11:26:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:39.791 11:26:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:39.791 11:26:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.791 11:26:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:40.049 11:26:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.049 11:26:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:40.049 11:26:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:40.049 11:26:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.049 11:26:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:40.307 11:26:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.307 11:26:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:40.307 11:26:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:40.307 11:26:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.307 11:26:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:40.565 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:40.824 11:26:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.824 11:26:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2677708 00:10:40.824 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2677708) - No such process 00:10:40.824 11:26:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2677708 00:10:40.825 11:26:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:40.825 11:26:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:40.825 11:26:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:40.825 11:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:40.825 11:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:40.825 11:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:40.825 11:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:40.825 11:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:40.825 11:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:40.825 rmmod nvme_tcp 00:10:40.825 rmmod nvme_fabrics 00:10:40.825 rmmod nvme_keyring 00:10:40.825 11:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:40.825 11:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:40.825 11:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:40.825 11:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2677432 ']' 00:10:40.825 11:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2677432 00:10:40.825 11:26:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 2677432 ']' 00:10:40.825 11:26:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 2677432 00:10:40.825 11:26:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:10:40.825 11:26:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:40.825 11:26:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2677432 00:10:40.825 11:26:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:40.825 11:26:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:40.825 11:26:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2677432' 00:10:40.825 killing process with pid 2677432 00:10:40.825 11:26:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 2677432 00:10:40.825 11:26:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 2677432 00:10:41.085 11:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:41.085 11:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:41.085 11:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:41.085 11:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:41.085 11:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:41.085 11:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.085 11:26:15 nvmf_tcp.nvmf_connect_stress -- 
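For reference, the control flow exercised by the connect_stress trace above can be summarized as a small shell sketch. This is not the real connect_stress.sh; the workload command and scratch-file path are placeholders, while kill -0, wait, rpc_cmd and the rpc.txt cleanup are the pieces visible in the trace (script lines 34, 35, 38 and 39):

    # Sketch of the pattern only -- not the actual connect_stress.sh.
    some_stress_workload & stress_pid=$!      # hypothetical; the real test backgrounds 20 workers (seq 1 20 above)
    while kill -0 "$stress_pid" 2>/dev/null; do
        rpc_cmd < rpc.txt > /dev/null         # rpc_cmd is the harness helper exercised at connect_stress.sh@35
    done
    wait "$stress_pid"                        # reap it; the "kill: ... No such process" message above marks the exit
    rm -f rpc.txt                             # matches the cleanup at connect_stress.sh@39

Once the polling loop observes the workload gone, the test falls through to nvmftestfini, which is the module unload and target shutdown seen next in the log.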
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:41.085 11:26:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.615 11:26:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:43.615 00:10:43.615 real 0m19.834s 00:10:43.615 user 0m41.905s 00:10:43.615 sys 0m8.248s 00:10:43.615 11:26:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:43.615 11:26:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:43.615 ************************************ 00:10:43.615 END TEST nvmf_connect_stress 00:10:43.615 ************************************ 00:10:43.615 11:26:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:43.615 11:26:17 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:43.615 11:26:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:43.615 11:26:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:43.615 11:26:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:43.615 ************************************ 00:10:43.615 START TEST nvmf_fused_ordering 00:10:43.615 ************************************ 00:10:43.615 11:26:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:43.615 * Looking for test storage... 00:10:43.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.615 11:26:17 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.615 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:43.615 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.615 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.615 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.615 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.615 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.615 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.615 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.615 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.615 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.615 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.615 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:43.615 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:43.615 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.615 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.615 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:10:43.615 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.615 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.615 11:26:17 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.615 11:26:17 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.615 11:26:17 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.615 11:26:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.616 11:26:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.616 11:26:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.616 11:26:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:10:43.616 11:26:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.616 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:10:43.616 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:43.616 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:43.616 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.616 11:26:17 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.616 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.616 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:43.616 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:43.616 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:43.616 11:26:17 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:43.616 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:43.616 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.616 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:43.616 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:43.616 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:43.616 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.616 11:26:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:43.616 11:26:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.616 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:43.616 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:43.616 11:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:10:43.616 11:26:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:49.021 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:49.021 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in 
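The device matching traced above keys off PCI vendor/device IDs (the E810 ports here are 0x8086:0x159b, X722 would be 0x37d2, plus several Mellanox IDs). Assuming lspci is available on the host, the same lookup the script derives from sysfs can be reproduced by hand:

    # Illustrative one-liners; paths and IDs are the ones shown in the trace.
    lspci -d 8086:159b                          # lists the two E810 ports found above: 0000:af:00.0 and 0000:af:00.1
    ls /sys/bus/pci/devices/0000:af:00.0/net    # the netdev bound to the first port (cvl_0_0 in this run)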
"${pci_devs[@]}" 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:49.021 Found net devices under 0000:af:00.0: cvl_0_0 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:49.021 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.022 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:49.022 Found net devices under 0000:af:00.1: cvl_0_1 00:10:49.022 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.022 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:49.022 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:10:49.022 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:49.022 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:49.022 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:49.022 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.022 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:49.022 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:49.022 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:49.022 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:49.022 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:49.022 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:49.022 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:49.022 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.022 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:10:49.022 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:49.022 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:49.022 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:49.022 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:49.022 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:49.022 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:49.022 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:49.280 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:49.280 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:49.280 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:49.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:49.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:10:49.280 00:10:49.280 --- 10.0.0.2 ping statistics --- 00:10:49.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.280 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:10:49.280 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:49.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:49.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:10:49.280 00:10:49.280 --- 10.0.0.1 ping statistics --- 00:10:49.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.280 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:10:49.280 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.280 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:10:49.280 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:49.280 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.280 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:49.280 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:49.280 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.280 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:49.280 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:49.280 11:26:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:49.280 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:49.280 11:26:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:49.280 11:26:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:49.280 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2683197 00:10:49.280 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
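Condensing the nvmf_tcp_init trace above into plain commands (interface names and addresses are exactly the ones shown; this is a summary sketch, not an extra setup step): the target-side port is moved into a network namespace, both sides get 10.0.0.x/24 addresses, TCP port 4420 is opened on the initiator interface, and reachability is ping-checked in both directions.

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target, ~0.17 ms in this run
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator

Running the target inside cvl_0_0_ns_spdk is why every nvmf_tgt and target-side command below is prefixed with "ip netns exec cvl_0_0_ns_spdk".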
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:49.280 11:26:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2683197 00:10:49.280 11:26:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 2683197 ']' 00:10:49.280 11:26:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.280 11:26:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:49.280 11:26:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.281 11:26:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:49.281 11:26:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:49.281 [2024-07-15 11:26:23.679644] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:10:49.281 [2024-07-15 11:26:23.679752] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.539 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.539 [2024-07-15 11:26:23.805278] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.539 [2024-07-15 11:26:23.911220] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.539 [2024-07-15 11:26:23.911268] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:49.539 [2024-07-15 11:26:23.911282] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.539 [2024-07-15 11:26:23.911293] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.539 [2024-07-15 11:26:23.911303] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:49.539 [2024-07-15 11:26:23.911336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:50.477 [2024-07-15 11:26:24.634895] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:50.477 [2024-07-15 11:26:24.655055] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:50.477 NULL1 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.477 11:26:24 
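The subsystem bring-up traced above is driven through the harness's rpc_cmd helper against the namespaced target. An equivalent sequence using SPDK's scripts/rpc.py (the rpc.py path and default socket are assumed here; the method names and flags are copied from the trace) would look like:

    # Assumed-equivalent rpc.py calls for the bring-up traced above.
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512        # 1000 MiB null bdev with 512-byte blocks
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The 1000 MiB null bdev is what the fused_ordering tool later reports on attach as "Namespace ID: 1 size: 1GB".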
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.477 11:26:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:50.477 [2024-07-15 11:26:24.708132] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:10:50.477 [2024-07-15 11:26:24.708167] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2683307 ] 00:10:50.477 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.046 Attached to nqn.2016-06.io.spdk:cnode1 00:10:51.046 Namespace ID: 1 size: 1GB 00:10:51.046 fused_ordering(0) 00:10:51.046 fused_ordering(1) 00:10:51.046 fused_ordering(2) 00:10:51.046 fused_ordering(3) 00:10:51.046 fused_ordering(4) 00:10:51.046 fused_ordering(5) 00:10:51.046 fused_ordering(6) 00:10:51.046 fused_ordering(7) 00:10:51.046 fused_ordering(8) 00:10:51.046 fused_ordering(9) 00:10:51.046 fused_ordering(10) 00:10:51.046 fused_ordering(11) 00:10:51.046 fused_ordering(12) 00:10:51.046 fused_ordering(13) 00:10:51.046 fused_ordering(14) 00:10:51.046 fused_ordering(15) 00:10:51.046 fused_ordering(16) 00:10:51.046 fused_ordering(17) 00:10:51.046 fused_ordering(18) 00:10:51.046 fused_ordering(19) 00:10:51.046 fused_ordering(20) 00:10:51.046 fused_ordering(21) 00:10:51.046 fused_ordering(22) 00:10:51.046 fused_ordering(23) 00:10:51.046 fused_ordering(24) 00:10:51.046 fused_ordering(25) 00:10:51.046 fused_ordering(26) 00:10:51.046 fused_ordering(27) 00:10:51.046 fused_ordering(28) 00:10:51.046 fused_ordering(29) 00:10:51.046 fused_ordering(30) 00:10:51.046 fused_ordering(31) 00:10:51.046 fused_ordering(32) 00:10:51.046 fused_ordering(33) 00:10:51.046 fused_ordering(34) 00:10:51.046 fused_ordering(35) 00:10:51.046 fused_ordering(36) 00:10:51.046 fused_ordering(37) 00:10:51.046 fused_ordering(38) 00:10:51.046 fused_ordering(39) 00:10:51.046 fused_ordering(40) 00:10:51.046 fused_ordering(41) 00:10:51.046 fused_ordering(42) 00:10:51.046 fused_ordering(43) 00:10:51.046 fused_ordering(44) 00:10:51.046 fused_ordering(45) 00:10:51.046 fused_ordering(46) 00:10:51.046 fused_ordering(47) 00:10:51.046 fused_ordering(48) 00:10:51.046 fused_ordering(49) 00:10:51.046 fused_ordering(50) 00:10:51.046 fused_ordering(51) 00:10:51.046 fused_ordering(52) 00:10:51.046 fused_ordering(53) 00:10:51.046 fused_ordering(54) 00:10:51.046 fused_ordering(55) 00:10:51.046 fused_ordering(56) 00:10:51.046 fused_ordering(57) 00:10:51.046 fused_ordering(58) 00:10:51.046 fused_ordering(59) 00:10:51.046 fused_ordering(60) 00:10:51.046 fused_ordering(61) 00:10:51.046 fused_ordering(62) 00:10:51.046 fused_ordering(63) 00:10:51.046 fused_ordering(64) 00:10:51.046 fused_ordering(65) 00:10:51.046 fused_ordering(66) 00:10:51.046 fused_ordering(67) 00:10:51.046 fused_ordering(68) 00:10:51.046 fused_ordering(69) 00:10:51.046 fused_ordering(70) 00:10:51.046 fused_ordering(71) 00:10:51.046 fused_ordering(72) 00:10:51.046 fused_ordering(73) 00:10:51.046 fused_ordering(74) 00:10:51.046 fused_ordering(75) 00:10:51.046 fused_ordering(76) 00:10:51.046 fused_ordering(77) 00:10:51.046 fused_ordering(78) 00:10:51.046 
fused_ordering(79) 00:10:51.046 fused_ordering(80) 00:10:51.046 fused_ordering(81) 00:10:51.046 fused_ordering(82) 00:10:51.046 fused_ordering(83) 00:10:51.046 fused_ordering(84) 00:10:51.046 fused_ordering(85) 00:10:51.046 fused_ordering(86) 00:10:51.046 fused_ordering(87) 00:10:51.046 fused_ordering(88) 00:10:51.046 fused_ordering(89) 00:10:51.046 fused_ordering(90) 00:10:51.046 fused_ordering(91) 00:10:51.046 fused_ordering(92) 00:10:51.046 fused_ordering(93) 00:10:51.046 fused_ordering(94) 00:10:51.046 fused_ordering(95) 00:10:51.046 fused_ordering(96) 00:10:51.046 fused_ordering(97) 00:10:51.046 fused_ordering(98) 00:10:51.046 fused_ordering(99) 00:10:51.046 fused_ordering(100) 00:10:51.046 fused_ordering(101) 00:10:51.046 fused_ordering(102) 00:10:51.046 fused_ordering(103) 00:10:51.046 fused_ordering(104) 00:10:51.046 fused_ordering(105) 00:10:51.046 fused_ordering(106) 00:10:51.046 fused_ordering(107) 00:10:51.046 fused_ordering(108) 00:10:51.046 fused_ordering(109) 00:10:51.046 fused_ordering(110) 00:10:51.046 fused_ordering(111) 00:10:51.046 fused_ordering(112) 00:10:51.046 fused_ordering(113) 00:10:51.046 fused_ordering(114) 00:10:51.046 fused_ordering(115) 00:10:51.046 fused_ordering(116) 00:10:51.046 fused_ordering(117) 00:10:51.046 fused_ordering(118) 00:10:51.046 fused_ordering(119) 00:10:51.046 fused_ordering(120) 00:10:51.046 fused_ordering(121) 00:10:51.046 fused_ordering(122) 00:10:51.046 fused_ordering(123) 00:10:51.046 fused_ordering(124) 00:10:51.046 fused_ordering(125) 00:10:51.046 fused_ordering(126) 00:10:51.046 fused_ordering(127) 00:10:51.046 fused_ordering(128) 00:10:51.046 fused_ordering(129) 00:10:51.046 fused_ordering(130) 00:10:51.046 fused_ordering(131) 00:10:51.046 fused_ordering(132) 00:10:51.046 fused_ordering(133) 00:10:51.046 fused_ordering(134) 00:10:51.046 fused_ordering(135) 00:10:51.046 fused_ordering(136) 00:10:51.046 fused_ordering(137) 00:10:51.046 fused_ordering(138) 00:10:51.046 fused_ordering(139) 00:10:51.046 fused_ordering(140) 00:10:51.046 fused_ordering(141) 00:10:51.046 fused_ordering(142) 00:10:51.046 fused_ordering(143) 00:10:51.046 fused_ordering(144) 00:10:51.046 fused_ordering(145) 00:10:51.046 fused_ordering(146) 00:10:51.046 fused_ordering(147) 00:10:51.046 fused_ordering(148) 00:10:51.046 fused_ordering(149) 00:10:51.046 fused_ordering(150) 00:10:51.046 fused_ordering(151) 00:10:51.046 fused_ordering(152) 00:10:51.046 fused_ordering(153) 00:10:51.046 fused_ordering(154) 00:10:51.046 fused_ordering(155) 00:10:51.046 fused_ordering(156) 00:10:51.046 fused_ordering(157) 00:10:51.046 fused_ordering(158) 00:10:51.046 fused_ordering(159) 00:10:51.046 fused_ordering(160) 00:10:51.046 fused_ordering(161) 00:10:51.046 fused_ordering(162) 00:10:51.046 fused_ordering(163) 00:10:51.046 fused_ordering(164) 00:10:51.046 fused_ordering(165) 00:10:51.046 fused_ordering(166) 00:10:51.046 fused_ordering(167) 00:10:51.046 fused_ordering(168) 00:10:51.046 fused_ordering(169) 00:10:51.046 fused_ordering(170) 00:10:51.046 fused_ordering(171) 00:10:51.046 fused_ordering(172) 00:10:51.046 fused_ordering(173) 00:10:51.046 fused_ordering(174) 00:10:51.046 fused_ordering(175) 00:10:51.046 fused_ordering(176) 00:10:51.046 fused_ordering(177) 00:10:51.046 fused_ordering(178) 00:10:51.046 fused_ordering(179) 00:10:51.046 fused_ordering(180) 00:10:51.046 fused_ordering(181) 00:10:51.046 fused_ordering(182) 00:10:51.046 fused_ordering(183) 00:10:51.046 fused_ordering(184) 00:10:51.046 fused_ordering(185) 00:10:51.046 fused_ordering(186) 00:10:51.046 
fused_ordering(187) 00:10:51.046 fused_ordering(188) 00:10:51.046 fused_ordering(189) 00:10:51.046 fused_ordering(190) 00:10:51.046 fused_ordering(191) 00:10:51.046 fused_ordering(192) 00:10:51.046 fused_ordering(193) 00:10:51.046 fused_ordering(194) 00:10:51.046 fused_ordering(195) 00:10:51.046 fused_ordering(196) 00:10:51.046 fused_ordering(197) 00:10:51.046 fused_ordering(198) 00:10:51.046 fused_ordering(199) 00:10:51.047 fused_ordering(200) 00:10:51.047 fused_ordering(201) 00:10:51.047 fused_ordering(202) 00:10:51.047 fused_ordering(203) 00:10:51.047 fused_ordering(204) 00:10:51.047 fused_ordering(205) 00:10:51.306 fused_ordering(206) 00:10:51.306 fused_ordering(207) 00:10:51.306 fused_ordering(208) 00:10:51.306 fused_ordering(209) 00:10:51.306 fused_ordering(210) 00:10:51.306 fused_ordering(211) 00:10:51.306 fused_ordering(212) 00:10:51.306 fused_ordering(213) 00:10:51.306 fused_ordering(214) 00:10:51.306 fused_ordering(215) 00:10:51.306 fused_ordering(216) 00:10:51.306 fused_ordering(217) 00:10:51.306 fused_ordering(218) 00:10:51.306 fused_ordering(219) 00:10:51.306 fused_ordering(220) 00:10:51.306 fused_ordering(221) 00:10:51.306 fused_ordering(222) 00:10:51.306 fused_ordering(223) 00:10:51.306 fused_ordering(224) 00:10:51.306 fused_ordering(225) 00:10:51.306 fused_ordering(226) 00:10:51.306 fused_ordering(227) 00:10:51.306 fused_ordering(228) 00:10:51.306 fused_ordering(229) 00:10:51.306 fused_ordering(230) 00:10:51.306 fused_ordering(231) 00:10:51.306 fused_ordering(232) 00:10:51.306 fused_ordering(233) 00:10:51.306 fused_ordering(234) 00:10:51.306 fused_ordering(235) 00:10:51.306 fused_ordering(236) 00:10:51.306 fused_ordering(237) 00:10:51.306 fused_ordering(238) 00:10:51.306 fused_ordering(239) 00:10:51.306 fused_ordering(240) 00:10:51.306 fused_ordering(241) 00:10:51.306 fused_ordering(242) 00:10:51.306 fused_ordering(243) 00:10:51.306 fused_ordering(244) 00:10:51.306 fused_ordering(245) 00:10:51.306 fused_ordering(246) 00:10:51.306 fused_ordering(247) 00:10:51.306 fused_ordering(248) 00:10:51.306 fused_ordering(249) 00:10:51.306 fused_ordering(250) 00:10:51.306 fused_ordering(251) 00:10:51.306 fused_ordering(252) 00:10:51.306 fused_ordering(253) 00:10:51.306 fused_ordering(254) 00:10:51.306 fused_ordering(255) 00:10:51.306 fused_ordering(256) 00:10:51.306 fused_ordering(257) 00:10:51.306 fused_ordering(258) 00:10:51.306 fused_ordering(259) 00:10:51.306 fused_ordering(260) 00:10:51.306 fused_ordering(261) 00:10:51.306 fused_ordering(262) 00:10:51.306 fused_ordering(263) 00:10:51.306 fused_ordering(264) 00:10:51.306 fused_ordering(265) 00:10:51.306 fused_ordering(266) 00:10:51.306 fused_ordering(267) 00:10:51.306 fused_ordering(268) 00:10:51.306 fused_ordering(269) 00:10:51.306 fused_ordering(270) 00:10:51.306 fused_ordering(271) 00:10:51.306 fused_ordering(272) 00:10:51.306 fused_ordering(273) 00:10:51.306 fused_ordering(274) 00:10:51.306 fused_ordering(275) 00:10:51.306 fused_ordering(276) 00:10:51.306 fused_ordering(277) 00:10:51.306 fused_ordering(278) 00:10:51.306 fused_ordering(279) 00:10:51.306 fused_ordering(280) 00:10:51.306 fused_ordering(281) 00:10:51.306 fused_ordering(282) 00:10:51.306 fused_ordering(283) 00:10:51.306 fused_ordering(284) 00:10:51.306 fused_ordering(285) 00:10:51.306 fused_ordering(286) 00:10:51.306 fused_ordering(287) 00:10:51.306 fused_ordering(288) 00:10:51.306 fused_ordering(289) 00:10:51.306 fused_ordering(290) 00:10:51.306 fused_ordering(291) 00:10:51.306 fused_ordering(292) 00:10:51.306 fused_ordering(293) 00:10:51.306 fused_ordering(294) 
00:10:51.306 fused_ordering(295) ... 00:10:53.013 fused_ordering(939) [repetitive fused_ordering counter output for sequence numbers 295 through 939 condensed; the entries are identical apart from the counter, with timestamps advancing from 00:10:51.306 to 00:10:53.013]
00:10:53.013 fused_ordering(940) 00:10:53.013 fused_ordering(941) 00:10:53.013 fused_ordering(942) 00:10:53.013 fused_ordering(943) 00:10:53.013 fused_ordering(944) 00:10:53.013 fused_ordering(945) 00:10:53.013 fused_ordering(946) 00:10:53.013 fused_ordering(947) 00:10:53.013 fused_ordering(948) 00:10:53.013 fused_ordering(949) 00:10:53.013 fused_ordering(950) 00:10:53.013 fused_ordering(951) 00:10:53.013 fused_ordering(952) 00:10:53.013 fused_ordering(953) 00:10:53.013 fused_ordering(954) 00:10:53.013 fused_ordering(955) 00:10:53.013 fused_ordering(956) 00:10:53.013 fused_ordering(957) 00:10:53.013 fused_ordering(958) 00:10:53.013 fused_ordering(959) 00:10:53.013 fused_ordering(960) 00:10:53.013 fused_ordering(961) 00:10:53.013 fused_ordering(962) 00:10:53.013 fused_ordering(963) 00:10:53.013 fused_ordering(964) 00:10:53.013 fused_ordering(965) 00:10:53.013 fused_ordering(966) 00:10:53.013 fused_ordering(967) 00:10:53.013 fused_ordering(968) 00:10:53.013 fused_ordering(969) 00:10:53.013 fused_ordering(970) 00:10:53.013 fused_ordering(971) 00:10:53.013 fused_ordering(972) 00:10:53.013 fused_ordering(973) 00:10:53.013 fused_ordering(974) 00:10:53.013 fused_ordering(975) 00:10:53.013 fused_ordering(976) 00:10:53.013 fused_ordering(977) 00:10:53.013 fused_ordering(978) 00:10:53.013 fused_ordering(979) 00:10:53.013 fused_ordering(980) 00:10:53.013 fused_ordering(981) 00:10:53.013 fused_ordering(982) 00:10:53.013 fused_ordering(983) 00:10:53.013 fused_ordering(984) 00:10:53.013 fused_ordering(985) 00:10:53.013 fused_ordering(986) 00:10:53.013 fused_ordering(987) 00:10:53.013 fused_ordering(988) 00:10:53.013 fused_ordering(989) 00:10:53.013 fused_ordering(990) 00:10:53.013 fused_ordering(991) 00:10:53.013 fused_ordering(992) 00:10:53.013 fused_ordering(993) 00:10:53.013 fused_ordering(994) 00:10:53.013 fused_ordering(995) 00:10:53.013 fused_ordering(996) 00:10:53.013 fused_ordering(997) 00:10:53.013 fused_ordering(998) 00:10:53.013 fused_ordering(999) 00:10:53.013 fused_ordering(1000) 00:10:53.013 fused_ordering(1001) 00:10:53.013 fused_ordering(1002) 00:10:53.013 fused_ordering(1003) 00:10:53.013 fused_ordering(1004) 00:10:53.013 fused_ordering(1005) 00:10:53.013 fused_ordering(1006) 00:10:53.013 fused_ordering(1007) 00:10:53.013 fused_ordering(1008) 00:10:53.013 fused_ordering(1009) 00:10:53.013 fused_ordering(1010) 00:10:53.013 fused_ordering(1011) 00:10:53.013 fused_ordering(1012) 00:10:53.013 fused_ordering(1013) 00:10:53.013 fused_ordering(1014) 00:10:53.013 fused_ordering(1015) 00:10:53.013 fused_ordering(1016) 00:10:53.013 fused_ordering(1017) 00:10:53.013 fused_ordering(1018) 00:10:53.013 fused_ordering(1019) 00:10:53.013 fused_ordering(1020) 00:10:53.013 fused_ordering(1021) 00:10:53.013 fused_ordering(1022) 00:10:53.013 fused_ordering(1023) 00:10:53.013 11:26:27 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:53.013 11:26:27 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:53.013 11:26:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:53.013 11:26:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:10:53.013 11:26:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:53.013 11:26:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:10:53.013 11:26:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:53.013 11:26:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:10:53.013 rmmod nvme_tcp 00:10:53.013 rmmod nvme_fabrics 00:10:53.013 rmmod nvme_keyring 00:10:53.013 11:26:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:53.013 11:26:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:10:53.013 11:26:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:10:53.013 11:26:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2683197 ']' 00:10:53.013 11:26:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2683197 00:10:53.013 11:26:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 2683197 ']' 00:10:53.013 11:26:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 2683197 00:10:53.013 11:26:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:10:53.013 11:26:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:53.013 11:26:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2683197 00:10:53.273 11:26:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:53.273 11:26:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:53.273 11:26:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2683197' 00:10:53.273 killing process with pid 2683197 00:10:53.273 11:26:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 2683197 00:10:53.273 11:26:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 2683197 00:10:53.533 11:26:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:53.533 11:26:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:53.533 11:26:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:53.533 11:26:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:53.533 11:26:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:53.533 11:26:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.533 11:26:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:53.533 11:26:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.436 11:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:55.436 00:10:55.436 real 0m12.215s 00:10:55.436 user 0m7.272s 00:10:55.436 sys 0m6.228s 00:10:55.436 11:26:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:55.436 11:26:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:55.436 ************************************ 00:10:55.436 END TEST nvmf_fused_ordering 00:10:55.436 ************************************ 00:10:55.436 11:26:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:55.436 11:26:29 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:55.436 11:26:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:55.436 11:26:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
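The tail of the fused_ordering run above is the standard teardown path: the EXIT trap is cleared, nvmftestfini unloads the nvme_tcp/nvme_fabrics/nvme_keyring initiator modules, killprocess stops the nvmf_tgt instance (pid 2683197), and remove_spdk_ns plus the address flush on cvl_0_1 return the NICs to a clean state before the next test starts. A minimal, hypothetical sketch of that kind of cleanup helper follows; apart from the names visible in the log (the target pid, cvl_0_1, cvl_0_0_ns_spdk) the function name and structure are illustrative, not the actual common.sh implementation.

    # Hedged sketch of an nvmftestfini-style teardown (illustrative, not common.sh itself)
    cleanup_nvmf_tcp_test() {
        local nvmfpid=$1                      # pid of the nvmf_tgt started for this test

        trap - SIGINT SIGTERM EXIT            # drop the error/exit trap first
        sync

        # Unload the kernel initiator modules; tolerate modules that were never loaded
        modprobe -v -r nvme-tcp     || true
        modprobe -v -r nvme-fabrics || true

        # Stop the target application if it is still running
        if [ -n "$nvmfpid" ] && kill -0 "$nvmfpid" 2>/dev/null; then
            kill "$nvmfpid"
            wait "$nvmfpid" 2>/dev/null || true
        fi

        # Remove the per-test network namespace and flush the initiator-side address
        ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
        ip -4 addr flush cvl_0_1        2>/dev/null || true
    }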
00:10:55.436 11:26:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:55.696 ************************************ 00:10:55.696 START TEST nvmf_delete_subsystem 00:10:55.696 ************************************ 00:10:55.696 11:26:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:55.696 * Looking for test storage... 00:10:55.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.696 11:26:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.696 11:26:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:55.696 11:26:30 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:55.696 11:26:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:02.347 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.347 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:02.348 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:02.348 
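What precedes is nvmf/common.sh enumerating candidate NICs: it builds per-family lists of PCI device IDs (0x8086:0x1592 and 0x8086:0x159b are the Intel E810 parts bound to the ice driver here), keeps the e810 list as pci_devs, and then resolves each PCI function to its kernel net device through /sys/bus/pci/devices/<addr>/net, which is where the cvl_0_0/cvl_0_1 names reported below come from. The stand-alone sketch below mirrors that lookup; the helper name and output text are illustrative assumptions, not the common.sh code.

    # Hypothetical sketch: map Intel E810 PCI functions (0x8086:0x1592 / 0x8086:0x159b)
    # to their net device names via sysfs, like the pci_net_devs lookup in the log.
    find_e810_net_devs() {
        local pci vendor device dev
        for pci in /sys/bus/pci/devices/*; do
            vendor=$(cat "$pci/vendor")       # e.g. 0x8086
            device=$(cat "$pci/device")       # e.g. 0x159b
            [ "$vendor" = "0x8086" ] || continue
            case "$device" in
                0x1592|0x159b) ;;             # E810 device IDs used by the test
                *) continue ;;
            esac
            [ -d "$pci/net" ] || continue     # skip functions with no bound net device
            for dev in "$pci/net"/*; do
                [ -e "$dev" ] || continue
                echo "Found $(basename "$pci") ($vendor - $device): $(basename "$dev")"
            done
        done
    }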
11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:02.348 Found net devices under 0000:af:00.0: cvl_0_0 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:02.348 Found net devices under 0000:af:00.1: cvl_0_1 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:02.348 11:26:35 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:02.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:02.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:11:02.348 00:11:02.348 --- 10.0.0.2 ping statistics --- 00:11:02.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.348 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:02.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:02.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:11:02.348 00:11:02.348 --- 10.0.0.1 ping statistics --- 00:11:02.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.348 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2687544 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2687544 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 2687544 ']' 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.348 11:26:35 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:02.348 11:26:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.348 [2024-07-15 11:26:35.958134] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:11:02.348 [2024-07-15 11:26:35.958189] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.348 EAL: No free 2048 kB hugepages reported on node 1 00:11:02.348 [2024-07-15 11:26:36.044753] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:02.348 [2024-07-15 11:26:36.135915] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.348 [2024-07-15 11:26:36.135957] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:02.348 [2024-07-15 11:26:36.135967] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.348 [2024-07-15 11:26:36.135976] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.348 [2024-07-15 11:26:36.135983] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
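The setup logged above builds the two-port phy TCP topology and starts the target: both E810 ports are flushed, cvl_0_0 is moved into a fresh namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace at 10.0.0.1/24 as the initiator, an iptables rule admits TCP port 4420, connectivity is ping-verified in both directions, and nvmf_tgt is then launched inside the namespace with -m 0x3 while the script waits for its RPC socket at /var/tmp/spdk.sock (pid 2687544; the reactor start-up notices follow below). A condensed, hedged sketch of those steps, not the exact common.sh code:

    # Hedged sketch of the namespace topology and target launch (simplified)
    NS=cvl_0_0_ns_spdk
    TGT_IF=cvl_0_0       # target-side E810 port, moved into the namespace
    INI_IF=cvl_0_1       # initiator-side E810 port, stays in the root namespace

    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                        # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator

    # Start the target inside the namespace and wait for its RPC socket
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done    # simplified waitforlisten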
00:11:02.348 [2024-07-15 11:26:36.136028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.348 [2024-07-15 11:26:36.136033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.636 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:02.636 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:11:02.636 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:02.636 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:02.636 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.636 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.636 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:02.636 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.637 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.637 [2024-07-15 11:26:36.954616] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:02.637 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.637 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:02.637 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.637 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.637 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.637 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:02.637 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.637 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.637 [2024-07-15 11:26:36.975125] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:02.637 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.637 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:02.637 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.637 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.637 NULL1 00:11:02.637 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.637 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:02.637 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.637 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.637 Delay0 00:11:02.637 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.637 11:26:36 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:02.637 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.637 11:26:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.637 11:26:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.637 11:26:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2687794 00:11:02.637 11:26:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:02.637 11:26:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:02.637 EAL: No free 2048 kB hugepages reported on node 1 00:11:02.637 [2024-07-15 11:26:37.086311] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:05.172 11:26:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:05.172 11:26:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.172 11:26:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:05.172 Write completed with error (sct=0, sc=8) 00:11:05.172 Write completed with error (sct=0, sc=8) 00:11:05.172 Read completed with error (sct=0, sc=8) 00:11:05.172 starting I/O failed: -6 00:11:05.172 Write completed with error (sct=0, sc=8) 00:11:05.172 Write completed with error (sct=0, sc=8) 00:11:05.172 Read completed with error (sct=0, sc=8) 00:11:05.172 Read completed with error (sct=0, sc=8) 00:11:05.172 starting I/O failed: -6 00:11:05.172 Read completed with error (sct=0, sc=8) 00:11:05.172 Write completed with error (sct=0, sc=8) 00:11:05.172 Read completed with error (sct=0, sc=8) 00:11:05.172 Read completed with error (sct=0, sc=8) 00:11:05.172 starting I/O failed: -6 00:11:05.172 Read completed with error (sct=0, sc=8) 00:11:05.172 Read completed with error (sct=0, sc=8) 00:11:05.172 Read completed with error (sct=0, sc=8) 00:11:05.172 Read completed with error (sct=0, sc=8) 00:11:05.172 starting I/O failed: -6 00:11:05.172 Read completed with error (sct=0, sc=8) 00:11:05.172 Write completed with error (sct=0, sc=8) 00:11:05.172 Write completed with error (sct=0, sc=8) 00:11:05.172 Read completed with error (sct=0, sc=8) 00:11:05.172 starting I/O failed: -6 00:11:05.172 Read completed with error (sct=0, sc=8) 00:11:05.172 Read completed with error (sct=0, sc=8) 00:11:05.172 Write completed with error (sct=0, sc=8) 00:11:05.172 Read completed with error (sct=0, sc=8) 00:11:05.172 starting I/O failed: -6 00:11:05.172 Read completed with error (sct=0, sc=8) 00:11:05.172 Read completed with error (sct=0, sc=8) 00:11:05.172 Read completed with error (sct=0, sc=8) 00:11:05.172 Read completed with error (sct=0, sc=8) 00:11:05.172 starting I/O failed: -6 00:11:05.172 Read completed with error (sct=0, sc=8) 00:11:05.172 Write completed with error (sct=0, sc=8) 00:11:05.172 Read completed with error (sct=0, sc=8) 00:11:05.172 Read completed with error (sct=0, sc=8) 
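From here delete_subsystem.sh drives the target purely over RPC: it creates the TCP transport with the options shown (-o -u 8192), adds subsystem nqn.2016-06.io.spdk:cnode1 (serial SPDK00000000000001, -a -m 10 as logged), exposes it on 10.0.0.2 port 4420, creates the NULL1 null bdev (1000 MB, 512-byte blocks) wrapped in the Delay0 delay bdev, and attaches Delay0 as the namespace. spdk_nvme_perf (pid 2687794) is then started as a 5-second, queue-depth-128, 70/30 randrw load, and after the 2-second sleep the subsystem is deleted underneath it, which is what produces the aborted Read/Write completions (sct=0, sc=8) surrounding this point in the log. The sketch below shows equivalent scripts/rpc.py invocations, assuming the stock SPDK rpc.py client and simplified paths; it is a reconstruction of the rpc_cmd sequence, not the test script itself.

    # Hedged rpc.py equivalents of the rpc_cmd sequence logged above (simplified)
    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512
    $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Initiator-side load: 5 s, QD 128, 70/30 randrw against the exported namespace
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

    sleep 2
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # delete while I/O is in flight
    wait $perf_pid || true                                   # perf observes aborted commands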
00:11:05.172 starting I/O failed: -6 [repeated 'Read/Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' completions condensed] 00:11:05.172 [2024-07-15 11:26:39.386791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bae90 is same with the state(5) to be set 00:11:05.172 [further completions with error condensed] [2024-07-15 11:26:39.387859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fee4c000c00 is same with the state(5) to be set 00:11:05.172 [further completions with error condensed] 00:11:06.110 [2024-07-15 11:26:40.347549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399500 is same with the state(5) to be set 00:11:06.110 [further completions with error condensed]
00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Write completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Write completed with error (sct=0, sc=8) 00:11:06.110 Write completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Write completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 [2024-07-15 11:26:40.388669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bd650 is same with the state(5) to be set 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Write completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Write completed with error (sct=0, sc=8) 00:11:06.110 Write completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Write completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Write completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Write completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Write completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Write completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 [2024-07-15 11:26:40.389162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bacb0 is same with the state(5) to be set 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read 
completed with error (sct=0, sc=8) 00:11:06.110 Write completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Write completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Write completed with error (sct=0, sc=8) 00:11:06.110 [2024-07-15 11:26:40.389474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fee4c00d2f0 is same with the state(5) to be set 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Write completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Write completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Write completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Write completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Write completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Write completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Write completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Write completed with error (sct=0, sc=8) 00:11:06.110 Write completed with error (sct=0, sc=8) 00:11:06.110 Write completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Write completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 Read completed with error (sct=0, sc=8) 00:11:06.110 [2024-07-15 11:26:40.389946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b9d00 is same with the state(5) to be set 00:11:06.110 Initializing NVMe Controllers 00:11:06.110 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:06.110 Controller IO queue size 128, less than required. 00:11:06.111 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:06.111 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:06.111 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:06.111 Initialization complete. Launching workers. 
00:11:06.111 ======================================================== 00:11:06.111 Latency(us) 00:11:06.111 Device Information : IOPS MiB/s Average min max 00:11:06.111 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 190.39 0.09 950087.53 1455.23 1018810.41 00:11:06.111 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.26 0.08 871025.05 726.77 1020555.45 00:11:06.111 ======================================================== 00:11:06.111 Total : 347.65 0.17 914323.85 726.77 1020555.45 00:11:06.111 00:11:06.111 [2024-07-15 11:26:40.391238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2399500 (9): Bad file descriptor 00:11:06.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:06.111 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.111 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:06.111 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2687794 00:11:06.111 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2687794 00:11:06.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2687794) - No such process 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2687794 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2687794 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2687794 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
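The trace above shows the polling idiom delete_subsystem.sh uses after the subsystem is deleted out from under the running workload: it probes the spdk_nvme_perf process with kill -0, sleeping 0.5 s between probes, until kill reports "No such process" or the retry budget (about 30 iterations here) is exhausted. A minimal sketch of that pattern; the helper name and pid variable are illustrative, the real script inlines the loop at the lines marked @34-@38:

    # Sketch of the kill -0 / sleep 0.5 polling seen in the trace (not the script verbatim).
    wait_for_exit() {
        local pid=$1 delay=0
        while kill -0 "$pid" 2>/dev/null; do       # still running?
            sleep 0.5
            (( delay++ > 30 )) && return 1         # give up after roughly 15 s of polling
        done
        return 0                                   # perf exited, as expected once the subsystem is gone
    }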
00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:06.679 [2024-07-15 11:26:40.921468] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2688366 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2688366 00:11:06.679 11:26:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:06.679 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.679 [2024-07-15 11:26:41.002280] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
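Before the second perf run, the test rebuilds everything it just tore down: the subsystem is recreated (capped at 10 namespaces via -m 10), the TCP listener on 10.0.0.2:4420 is re-added, the Delay0 bdev is attached as a namespace again, and spdk_nvme_perf is launched in the background so its PID can be polled. Condensed from the rpc_cmd and perf invocations traced above, with the long workspace paths shortened and rpc.py assumed to be on PATH:

    # Sketch of the traced sequence; paths shortened for readability.
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!       # polled below with kill -0 until the 3-second run finishes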
00:11:07.247 11:26:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:07.247 11:26:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2688366 00:11:07.247 11:26:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:07.506 11:26:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:07.506 11:26:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2688366 00:11:07.506 11:26:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:08.074 11:26:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:08.074 11:26:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2688366 00:11:08.074 11:26:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:08.642 11:26:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:08.643 11:26:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2688366 00:11:08.643 11:26:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:09.210 11:26:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:09.210 11:26:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2688366 00:11:09.210 11:26:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:09.779 11:26:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:09.779 11:26:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2688366 00:11:09.779 11:26:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:10.038 Initializing NVMe Controllers 00:11:10.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:10.038 Controller IO queue size 128, less than required. 00:11:10.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:10.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:10.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:10.038 Initialization complete. Launching workers. 
00:11:10.038 ======================================================== 00:11:10.038 Latency(us) 00:11:10.038 Device Information : IOPS MiB/s Average min max 00:11:10.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005446.63 1000216.48 1019683.24 00:11:10.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006432.84 1000222.44 1019555.12 00:11:10.038 ======================================================== 00:11:10.038 Total : 256.00 0.12 1005939.73 1000216.48 1019683.24 00:11:10.038 00:11:10.038 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:10.038 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2688366 00:11:10.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2688366) - No such process 00:11:10.038 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2688366 00:11:10.038 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:10.038 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:10.038 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:10.038 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:10.038 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:10.038 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:10.038 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:10.038 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:10.038 rmmod nvme_tcp 00:11:10.038 rmmod nvme_fabrics 00:11:10.297 rmmod nvme_keyring 00:11:10.297 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:10.297 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:10.297 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:10.297 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2687544 ']' 00:11:10.297 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2687544 00:11:10.297 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 2687544 ']' 00:11:10.297 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 2687544 00:11:10.297 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:11:10.297 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:10.297 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2687544 00:11:10.297 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:10.297 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:10.297 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2687544' 00:11:10.297 killing process with pid 2687544 00:11:10.297 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 2687544 00:11:10.297 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
2687544 00:11:10.556 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:10.556 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:10.556 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:10.556 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:10.556 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:10.556 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.556 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:10.556 11:26:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.464 11:26:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:12.464 00:11:12.464 real 0m16.946s 00:11:12.464 user 0m31.410s 00:11:12.464 sys 0m5.446s 00:11:12.464 11:26:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:12.464 11:26:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:12.464 ************************************ 00:11:12.464 END TEST nvmf_delete_subsystem 00:11:12.464 ************************************ 00:11:12.464 11:26:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:12.464 11:26:46 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:12.464 11:26:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:12.464 11:26:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:12.464 11:26:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:12.464 ************************************ 00:11:12.464 START TEST nvmf_ns_masking 00:11:12.464 ************************************ 00:11:12.464 11:26:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:12.723 * Looking for test storage... 
00:11:12.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.723 11:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:12.723 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:12.723 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.723 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.723 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.723 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.723 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.723 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.723 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.723 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.723 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.723 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.723 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:12.723 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:12.723 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.723 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.723 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:12.723 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.723 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:12.723 11:26:47 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.723 11:26:47 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.723 11:26:47 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.723 11:26:47 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.723 11:26:47 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.723 11:26:47 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.723 11:26:47 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=165198db-5dcb-410f-bc31-8eb6f1c48552 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=5be503c0-5395-49a2-9504-cf42331433af 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=65c11e93-7b9e-41bf-9520-f08382da2308 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:12.724 11:26:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:19.294 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:19.294 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.294 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:19.294 
11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:19.295 Found net devices under 0000:af:00.0: cvl_0_0 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:19.295 Found net devices under 0000:af:00.1: cvl_0_1 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:19.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:19.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:11:19.295 00:11:19.295 --- 10.0.0.2 ping statistics --- 00:11:19.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.295 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:19.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:19.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:11:19.295 00:11:19.295 --- 10.0.0.1 ping statistics --- 00:11:19.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.295 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2692684 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2692684 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2692684 ']' 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:19.295 11:26:52 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:19.295 11:26:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:19.295 [2024-07-15 11:26:52.972423] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:11:19.295 [2024-07-15 11:26:52.972483] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.295 EAL: No free 2048 kB hugepages reported on node 1 00:11:19.295 [2024-07-15 11:26:53.061231] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.295 [2024-07-15 11:26:53.153259] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:19.295 [2024-07-15 11:26:53.153298] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:19.295 [2024-07-15 11:26:53.153309] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:19.295 [2024-07-15 11:26:53.153318] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:19.295 [2024-07-15 11:26:53.153325] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:19.295 [2024-07-15 11:26:53.153345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.554 11:26:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:19.554 11:26:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:19.554 11:26:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:19.554 11:26:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:19.554 11:26:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:19.554 11:26:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.554 11:26:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:19.812 [2024-07-15 11:26:54.179584] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.812 11:26:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:19.812 11:26:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:19.812 11:26:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:20.069 Malloc1 00:11:20.069 11:26:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:20.328 Malloc2 00:11:20.328 11:26:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
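All of this runs against a target isolated in its own network namespace: nvmf_tcp_init (traced a few entries earlier) moves the target-side e810 port cvl_0_0 into cvl_0_0_ns_spdk, leaves the initiator port cvl_0_1 in the root namespace, opens TCP port 4420, and verifies reachability in both directions with ping. A condensed sketch of that plumbing as it appears in the trace:

    # Namespace plumbing from nvmf_tcp_init, condensed; device names match the log.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                         # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target namespace -> initiator

The nvmf_tgt process itself is then started under ip netns exec cvl_0_0_ns_spdk, which is why its 10.0.0.2:4420 listener is only reachable through cvl_0_1.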
00:11:20.586 11:26:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:20.845 11:26:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.103 [2024-07-15 11:26:55.475466] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.103 11:26:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:21.103 11:26:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 65c11e93-7b9e-41bf-9520-f08382da2308 -a 10.0.0.2 -s 4420 -i 4 00:11:21.361 11:26:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:21.361 11:26:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:21.361 11:26:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:21.361 11:26:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:21.361 11:26:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:23.264 11:26:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:23.264 11:26:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:23.264 11:26:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:23.264 11:26:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:23.264 11:26:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:23.264 11:26:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:23.264 11:26:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:23.264 11:26:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:23.521 11:26:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:23.521 11:26:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:23.521 11:26:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:23.521 11:26:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:23.521 11:26:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:23.521 [ 0]:0x1 00:11:23.521 11:26:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:23.521 11:26:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:23.521 11:26:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=77c73b21a0d44761a52d3e2f33459378 00:11:23.521 11:26:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 77c73b21a0d44761a52d3e2f33459378 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:23.521 11:26:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
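On the host side, ns_is_visible decides whether a namespace is exposed to this initiator by grepping the controller's active namespace list and then reading the NGUID: an attached-but-masked namespace does not appear in the list and reports an all-zero NGUID from nvme id-ns. A sketch of that check as reconstructed from the trace (the /dev/nvme0 device node and 0x1 NSID are the values used here; the real helper is at the ns_masking.sh lines marked @43-@45):

    # Sketch of the visibility probe; not the script verbatim.
    ns_is_visible() {
        local nsid=$1                                          # e.g. 0x1
        nvme list-ns /dev/nvme0 | grep "$nsid"                 # prints "[ 0]:0x1" when the namespace is exposed
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]     # masked namespaces report an all-zero NGUID
    }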
00:11:23.779 11:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:23.779 11:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:23.779 11:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:23.779 [ 0]:0x1 00:11:23.779 11:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:23.779 11:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:23.779 11:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=77c73b21a0d44761a52d3e2f33459378 00:11:23.779 11:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 77c73b21a0d44761a52d3e2f33459378 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:23.779 11:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:23.779 11:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:23.779 11:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:23.779 [ 1]:0x2 00:11:23.779 11:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:23.779 11:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:23.779 11:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aee4a5f8a9754779bb86ba85490f5232 00:11:23.779 11:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aee4a5f8a9754779bb86ba85490f5232 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:23.779 11:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:23.779 11:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:24.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.037 11:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.037 11:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:24.295 11:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:24.295 11:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 65c11e93-7b9e-41bf-9520-f08382da2308 -a 10.0.0.2 -s 4420 -i 4 00:11:24.553 11:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:24.553 11:26:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:24.553 11:26:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:24.553 11:26:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:11:24.553 11:26:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:11:24.553 11:26:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:27.083 11:27:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:27.083 11:27:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:27.083 11:27:00 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:27.083 11:27:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:27.083 11:27:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:27.083 11:27:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:27.083 11:27:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:27.083 11:27:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:27.083 [ 0]:0x2 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aee4a5f8a9754779bb86ba85490f5232 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
aee4a5f8a9754779bb86ba85490f5232 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:27.083 [ 0]:0x1 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=77c73b21a0d44761a52d3e2f33459378 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 77c73b21a0d44761a52d3e2f33459378 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:27.083 [ 1]:0x2 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:27.083 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:27.342 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aee4a5f8a9754779bb86ba85490f5232 00:11:27.342 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aee4a5f8a9754779bb86ba85490f5232 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:27.342 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:27.342 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:27.342 11:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:27.342 11:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:27.342 11:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:27.342 11:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:27.342 11:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:27.342 11:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:27.342 11:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:27.342 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:27.342 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:27.342 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:27.342 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:27.600 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:11:27.600 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:27.601 11:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:27.601 11:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:27.601 11:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:27.601 11:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:27.601 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:27.601 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:27.601 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:27.601 [ 0]:0x2 00:11:27.601 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:27.601 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:27.601 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aee4a5f8a9754779bb86ba85490f5232 00:11:27.601 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aee4a5f8a9754779bb86ba85490f5232 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:27.601 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:27.601 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:27.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.601 11:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:27.860 11:27:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:27.860 11:27:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 65c11e93-7b9e-41bf-9520-f08382da2308 -a 10.0.0.2 -s 4420 -i 4 00:11:28.119 11:27:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:28.119 11:27:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:28.119 11:27:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:28.119 11:27:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:28.119 11:27:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:28.119 11:27:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:30.021 11:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:30.021 11:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:30.021 11:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:30.021 11:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:30.021 11:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:30.021 11:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
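The visibility probe that this trace keeps repeating (target/ns_masking.sh@43-@45) reduces to two nvme-cli calls plus an NGUID comparison. A minimal standalone sketch of that helper, assuming the controller enumerated as /dev/nvme0 and NSID 0x1 exactly as in the run above:

#!/usr/bin/env bash
# Sketch of the ns_is_visible check used throughout this test: a namespace counts
# as visible only if list-ns reports it AND id-ns returns a non-zero NGUID.
ctrl=/dev/nvme0   # assumption: controller name resolved from 'nvme list-subsys' as in this run
nsid=0x1          # assumption: namespace under test
nvme list-ns "$ctrl" | grep -q "$nsid" || { echo "nsid $nsid not listed"; exit 1; }
nguid=$(nvme id-ns "$ctrl" -n "$nsid" -o json | jq -r .nguid)
if [[ "$nguid" != "00000000000000000000000000000000" ]]; then
    echo "nsid $nsid visible, nguid=$nguid"
else
    echo "nsid $nsid masked (all-zero nguid)"
fi

A masked namespace shows up with the all-zero NGUID, which is exactly why the NOT-wrapped calls above succeed by failing.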
00:11:30.021 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:30.021 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:30.279 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:30.280 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:30.280 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:30.280 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:30.280 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:30.280 [ 0]:0x1 00:11:30.280 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:30.280 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:30.280 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=77c73b21a0d44761a52d3e2f33459378 00:11:30.280 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 77c73b21a0d44761a52d3e2f33459378 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:30.280 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:30.280 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:30.280 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:30.280 [ 1]:0x2 00:11:30.280 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:30.280 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:30.280 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aee4a5f8a9754779bb86ba85490f5232 00:11:30.280 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aee4a5f8a9754779bb86ba85490f5232 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:30.280 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:30.539 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:30.539 11:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:30.539 11:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:30.539 11:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:30.539 11:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.539 11:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:30.539 11:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.539 11:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:30.539 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:30.539 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:30.539 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:30.539 11:27:04 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:11:30.539 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:30.539 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:30.539 11:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:30.539 11:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:30.539 11:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:30.539 11:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:30.539 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:30.539 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:30.539 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:30.539 [ 0]:0x2 00:11:30.539 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:30.539 11:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:30.798 11:27:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aee4a5f8a9754779bb86ba85490f5232 00:11:30.798 11:27:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aee4a5f8a9754779bb86ba85490f5232 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:30.798 11:27:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:30.798 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:30.798 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:30.798 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:30.798 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.798 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:30.798 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.798 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:30.798 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.798 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:30.798 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:30.798 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:30.798 [2024-07-15 11:27:05.249156] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:30.798 request: 00:11:30.798 { 00:11:30.798 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:30.798 "nsid": 2, 00:11:30.798 "host": "nqn.2016-06.io.spdk:host1", 00:11:30.798 "method": "nvmf_ns_remove_host", 00:11:30.798 "req_id": 1 00:11:30.798 } 00:11:30.798 Got JSON-RPC error response 00:11:30.798 response: 00:11:30.798 { 00:11:30.798 "code": -32602, 00:11:30.798 "message": "Invalid parameters" 00:11:30.798 } 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:31.057 [ 0]:0x2 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:31.057 11:27:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aee4a5f8a9754779bb86ba85490f5232 00:11:31.058 11:27:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
aee4a5f8a9754779bb86ba85490f5232 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:31.058 11:27:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:31.058 11:27:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:31.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.058 11:27:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2695263 00:11:31.058 11:27:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:31.058 11:27:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.058 11:27:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2695263 /var/tmp/host.sock 00:11:31.058 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2695263 ']' 00:11:31.058 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:11:31.058 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:31.058 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:31.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:31.058 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:31.058 11:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:31.058 [2024-07-15 11:27:05.505679] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
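The masking behaviour exercised between ns_masking.sh@80 and @111 above is driven by three JSON-RPC calls. A hedged sketch of that sequence, assuming the same subsystem, bdev, and host NQNs used in this run and an rpc.py from a local SPDK checkout:

# Sketch only: per-host namespace visibility as exercised in this trace.
RPC=./scripts/rpc.py                       # assumption: relative path to rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
# Attach the namespace with auto-visibility disabled, so no host sees it by default.
$RPC nvmf_subsystem_add_ns $NQN Malloc1 -n 1 --no-auto-visible
# Grant visibility to one host NQN; its all-zero NGUID becomes the real one.
$RPC nvmf_ns_add_host $NQN 1 nqn.2016-06.io.spdk:host1
# Revoke it again; the namespace reports a zero NGUID to that host once more.
$RPC nvmf_ns_remove_host $NQN 1 nqn.2016-06.io.spdk:host1
# Removing a host from an NSID it was never granted fails with -32602
# "Invalid parameters", as the JSON-RPC error response above shows.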
00:11:31.058 [2024-07-15 11:27:05.505737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2695263 ] 00:11:31.316 EAL: No free 2048 kB hugepages reported on node 1 00:11:31.316 [2024-07-15 11:27:05.586324] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.316 [2024-07-15 11:27:05.690246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.255 11:27:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:32.255 11:27:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:32.255 11:27:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.513 11:27:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:32.773 11:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 165198db-5dcb-410f-bc31-8eb6f1c48552 00:11:32.773 11:27:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:32.773 11:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 165198DB5DCB410FBC318EB6F1C48552 -i 00:11:33.032 11:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 5be503c0-5395-49a2-9504-cf42331433af 00:11:33.032 11:27:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:33.032 11:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 5BE503C0539549A29504CF42331433AF -i 00:11:33.292 11:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:33.551 11:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:33.810 11:27:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:33.810 11:27:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:34.377 nvme0n1 00:11:34.377 11:27:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:34.377 11:27:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:11:34.637 nvme1n2 00:11:34.637 11:27:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:34.637 11:27:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:34.637 11:27:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:34.637 11:27:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:34.637 11:27:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:34.896 11:27:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:34.896 11:27:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:34.896 11:27:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:34.896 11:27:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:35.156 11:27:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 165198db-5dcb-410f-bc31-8eb6f1c48552 == \1\6\5\1\9\8\d\b\-\5\d\c\b\-\4\1\0\f\-\b\c\3\1\-\8\e\b\6\f\1\c\4\8\5\5\2 ]] 00:11:35.156 11:27:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:35.156 11:27:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:35.156 11:27:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:35.724 11:27:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 5be503c0-5395-49a2-9504-cf42331433af == \5\b\e\5\0\3\c\0\-\5\3\9\5\-\4\9\a\2\-\9\5\0\4\-\c\f\4\2\3\3\1\4\3\3\a\f ]] 00:11:35.724 11:27:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2695263 00:11:35.724 11:27:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2695263 ']' 00:11:35.724 11:27:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2695263 00:11:35.724 11:27:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:35.724 11:27:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:35.724 11:27:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2695263 00:11:35.724 11:27:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:35.724 11:27:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:35.724 11:27:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2695263' 00:11:35.724 killing process with pid 2695263 00:11:35.724 11:27:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2695263 00:11:35.724 11:27:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2695263 00:11:35.983 11:27:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:36.243 11:27:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:11:36.243 11:27:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:11:36.243 11:27:10 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:36.243 11:27:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:36.243 11:27:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:36.243 11:27:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:36.243 11:27:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:36.243 11:27:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:36.243 rmmod nvme_tcp 00:11:36.243 rmmod nvme_fabrics 00:11:36.243 rmmod nvme_keyring 00:11:36.243 11:27:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:36.243 11:27:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:36.243 11:27:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:36.243 11:27:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2692684 ']' 00:11:36.243 11:27:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2692684 00:11:36.243 11:27:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2692684 ']' 00:11:36.243 11:27:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2692684 00:11:36.243 11:27:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:36.243 11:27:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:36.243 11:27:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2692684 00:11:36.243 11:27:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:36.243 11:27:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:36.243 11:27:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2692684' 00:11:36.243 killing process with pid 2692684 00:11:36.243 11:27:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2692684 00:11:36.243 11:27:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2692684 00:11:36.503 11:27:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:36.503 11:27:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:36.503 11:27:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:36.503 11:27:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:36.503 11:27:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:36.503 11:27:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.503 11:27:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:36.503 11:27:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.135 11:27:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:39.135 00:11:39.135 real 0m26.043s 00:11:39.135 user 0m31.249s 00:11:39.135 sys 0m6.917s 00:11:39.135 11:27:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:39.135 11:27:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:39.135 ************************************ 00:11:39.135 END TEST nvmf_ns_masking 00:11:39.135 ************************************ 00:11:39.135 11:27:12 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:11:39.135 11:27:12 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:39.135 11:27:12 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:39.135 11:27:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:39.135 11:27:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.135 11:27:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:39.135 ************************************ 00:11:39.135 START TEST nvmf_nvme_cli 00:11:39.135 ************************************ 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:39.135 * Looking for test storage... 00:11:39.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:39.135 11:27:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:44.412 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:44.412 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:44.412 Found net devices under 0000:af:00.0: cvl_0_0 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:44.412 Found net devices under 0000:af:00.1: cvl_0_1 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:44.412 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:44.671 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:44.671 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:44.671 11:27:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:44.671 11:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:44.671 11:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:44.671 11:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:44.671 11:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:44.671 11:27:19 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:44.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:44.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:11:44.671 00:11:44.671 --- 10.0.0.2 ping statistics --- 00:11:44.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.671 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:11:44.671 11:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:44.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:44.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:11:44.930 00:11:44.930 --- 10.0.0.1 ping statistics --- 00:11:44.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.930 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:11:44.930 11:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:44.930 11:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:44.930 11:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:44.930 11:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:44.930 11:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:44.930 11:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:44.930 11:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:44.930 11:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:44.930 11:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:44.930 11:27:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:44.930 11:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:44.930 11:27:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:44.930 11:27:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:44.930 11:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2700184 00:11:44.930 11:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2700184 00:11:44.930 11:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:44.930 11:27:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 2700184 ']' 00:11:44.930 11:27:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.930 11:27:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:44.930 11:27:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.930 11:27:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:44.930 11:27:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:44.930 [2024-07-15 11:27:19.235912] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
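The nvmf_tcp_init sequence that precedes this (nvmf/common.sh@242-@268) isolates the target-side port in a network namespace before the target starts, then verifies reachability in both directions. A condensed sketch, assuming the same interface names (cvl_0_0 / cvl_0_1) and 10.0.0.0/24 addresses used here, run as root:

# Sketch of the loopback TCP test bed set up above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator reachability
# The target itself is then launched inside the namespace, as the trace shows:
# ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF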
00:11:44.930 [2024-07-15 11:27:19.235968] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.930 EAL: No free 2048 kB hugepages reported on node 1 00:11:44.930 [2024-07-15 11:27:19.323263] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:45.189 [2024-07-15 11:27:19.412674] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:45.189 [2024-07-15 11:27:19.412719] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:45.189 [2024-07-15 11:27:19.412729] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:45.189 [2024-07-15 11:27:19.412739] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:45.189 [2024-07-15 11:27:19.412746] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:45.189 [2024-07-15 11:27:19.412849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.189 [2024-07-15 11:27:19.412963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:45.189 [2024-07-15 11:27:19.413052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:45.189 [2024-07-15 11:27:19.413053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.756 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:45.756 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:11:45.756 11:27:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:45.756 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:45.756 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:45.756 11:27:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:45.756 11:27:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:45.756 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.756 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:46.015 [2024-07-15 11:27:20.222595] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.015 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.015 11:27:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:46.015 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.015 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:46.015 Malloc0 00:11:46.015 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:46.016 Malloc1 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.016 11:27:20 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:46.016 [2024-07-15 11:27:20.312885] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:46.016 00:11:46.016 Discovery Log Number of Records 2, Generation counter 2 00:11:46.016 =====Discovery Log Entry 0====== 00:11:46.016 trtype: tcp 00:11:46.016 adrfam: ipv4 00:11:46.016 subtype: current discovery subsystem 00:11:46.016 treq: not required 00:11:46.016 portid: 0 00:11:46.016 trsvcid: 4420 00:11:46.016 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:46.016 traddr: 10.0.0.2 00:11:46.016 eflags: explicit discovery connections, duplicate discovery information 00:11:46.016 sectype: none 00:11:46.016 =====Discovery Log Entry 1====== 00:11:46.016 trtype: tcp 00:11:46.016 adrfam: ipv4 00:11:46.016 subtype: nvme subsystem 00:11:46.016 treq: not required 00:11:46.016 portid: 0 00:11:46.016 trsvcid: 4420 00:11:46.016 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:46.016 traddr: 10.0.0.2 00:11:46.016 eflags: none 00:11:46.016 sectype: none 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:46.016 11:27:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:47.392 11:27:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:47.392 11:27:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:11:47.392 11:27:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:47.392 11:27:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:47.392 11:27:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:47.392 11:27:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:11:49.927 11:27:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:49.927 11:27:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:49.927 11:27:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:49.927 11:27:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:49.927 11:27:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:49.927 11:27:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:11:49.927 11:27:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:49.927 11:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:49.927 11:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:49.927 11:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:49.927 11:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:49.927 11:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:49.927 11:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:49.927 11:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:49.927 11:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:49.927 11:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:49.927 11:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:49.927 11:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:49.927 11:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:49.927 11:27:23 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:49.927 11:27:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:49.927 /dev/nvme0n1 ]] 00:11:49.927 11:27:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:49.927 11:27:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:49.927 11:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:49.927 11:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:49.927 11:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:49.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:49.927 11:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:49.927 rmmod nvme_tcp 00:11:49.927 rmmod nvme_fabrics 00:11:49.927 rmmod nvme_keyring 00:11:50.187 11:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:50.187 11:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:50.187 11:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:50.187 11:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2700184 ']' 00:11:50.187 11:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2700184 00:11:50.187 11:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 2700184 ']' 00:11:50.187 11:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 2700184 00:11:50.187 11:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:11:50.187 11:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:50.187 11:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2700184 00:11:50.187 11:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:50.187 11:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:50.187 11:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2700184' 00:11:50.187 killing process with pid 2700184 00:11:50.187 11:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 2700184 00:11:50.187 11:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 2700184 00:11:50.445 11:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:50.445 11:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:50.445 11:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:50.445 11:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:50.445 11:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:50.445 11:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.445 11:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:50.445 11:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.348 11:27:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:52.348 00:11:52.348 real 0m13.754s 00:11:52.348 user 0m22.784s 00:11:52.348 sys 0m5.212s 00:11:52.348 11:27:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:52.348 11:27:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.348 ************************************ 00:11:52.348 END TEST nvmf_nvme_cli 00:11:52.348 ************************************ 00:11:52.607 11:27:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:52.607 11:27:26 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:11:52.607 11:27:26 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:52.607 11:27:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:52.607 11:27:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:52.607 11:27:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:52.607 ************************************ 00:11:52.607 START TEST nvmf_vfio_user 00:11:52.607 ************************************ 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:52.607 * Looking for test storage... 00:11:52.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:52.607 
11:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:52.607 11:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:52.607 11:27:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2701751 00:11:52.607 11:27:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2701751' 00:11:52.607 Process pid: 2701751 00:11:52.607 11:27:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:52.607 11:27:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2701751 00:11:52.607 11:27:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:52.607 11:27:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2701751 ']' 00:11:52.607 11:27:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.607 11:27:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:52.607 11:27:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.607 11:27:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:52.607 11:27:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:52.607 [2024-07-15 11:27:27.054274] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:11:52.607 [2024-07-15 11:27:27.054338] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.866 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.866 [2024-07-15 11:27:27.137976] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.866 [2024-07-15 11:27:27.228615] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.866 [2024-07-15 11:27:27.228658] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.866 [2024-07-15 11:27:27.228668] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.866 [2024-07-15 11:27:27.228682] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.866 [2024-07-15 11:27:27.228690] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
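The trace above launches nvmf_tgt on cores 0-3 and waits for its RPC socket; the RPC calls recorded below then create the VFIOUSER transport and two malloc-backed subsystems. A minimal stand-alone sketch of that setup for a single subsystem, assuming an SPDK build checked out at SPDK_DIR (a placeholder for the workspace checkout used in this run) and a writable /var/run/vfio-user:

SPDK_DIR=/path/to/spdk                                   # placeholder; this run uses the jenkins workspace checkout
$SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
# crude stand-in for the script's waitforlisten: poll the RPC socket until the target answers
until $SPDK_DIR/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 1; done
$SPDK_DIR/scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
$SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
$SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The listener address is the directory that spdk_nvme_identify and spdk_nvme_perf later pass as traddr in their -r transport strings.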
00:11:52.866 [2024-07-15 11:27:27.228753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.866 [2024-07-15 11:27:27.228865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.866 [2024-07-15 11:27:27.228976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.866 [2024-07-15 11:27:27.228976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.125 11:27:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:53.125 11:27:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:11:53.125 11:27:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:54.061 11:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:54.319 11:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:54.319 11:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:54.319 11:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:54.319 11:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:54.319 11:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:54.578 Malloc1 00:11:54.578 11:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:54.837 11:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:55.095 11:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:55.353 11:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:55.353 11:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:55.353 11:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:55.611 Malloc2 00:11:55.611 11:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:55.868 11:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:56.128 11:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:56.389 11:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:11:56.389 11:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:11:56.389 11:27:30 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:56.389 11:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:56.389 11:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:11:56.389 11:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:56.389 [2024-07-15 11:27:30.702683] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:11:56.389 [2024-07-15 11:27:30.702718] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2702435 ] 00:11:56.389 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.389 [2024-07-15 11:27:30.740767] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:11:56.389 [2024-07-15 11:27:30.748787] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:56.389 [2024-07-15 11:27:30.748813] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5bfbc67000 00:11:56.389 [2024-07-15 11:27:30.749771] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:56.389 [2024-07-15 11:27:30.750771] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:56.389 [2024-07-15 11:27:30.751773] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:56.389 [2024-07-15 11:27:30.752776] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:56.389 [2024-07-15 11:27:30.753788] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:56.389 [2024-07-15 11:27:30.754793] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:56.389 [2024-07-15 11:27:30.755803] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:56.389 [2024-07-15 11:27:30.756818] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:56.389 [2024-07-15 11:27:30.757833] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:56.389 [2024-07-15 11:27:30.757848] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f5bfbc5c000 00:11:56.389 [2024-07-15 11:27:30.759268] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:56.389 [2024-07-15 11:27:30.776732] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:11:56.389 [2024-07-15 11:27:30.776763] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:11:56.389 [2024-07-15 11:27:30.782049] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:56.389 [2024-07-15 11:27:30.782103] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:56.389 [2024-07-15 11:27:30.782201] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:11:56.389 [2024-07-15 11:27:30.782222] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:11:56.389 [2024-07-15 11:27:30.782230] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:11:56.389 [2024-07-15 11:27:30.783054] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:11:56.389 [2024-07-15 11:27:30.783066] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:11:56.389 [2024-07-15 11:27:30.783080] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:11:56.389 [2024-07-15 11:27:30.784059] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:56.389 [2024-07-15 11:27:30.784071] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:11:56.389 [2024-07-15 11:27:30.784080] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:11:56.389 [2024-07-15 11:27:30.785062] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:11:56.389 [2024-07-15 11:27:30.785074] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:56.389 [2024-07-15 11:27:30.786068] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:11:56.389 [2024-07-15 11:27:30.786078] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:11:56.389 [2024-07-15 11:27:30.786084] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:11:56.389 [2024-07-15 11:27:30.786093] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:56.389 [2024-07-15 11:27:30.786200] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:11:56.389 [2024-07-15 11:27:30.786206] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:56.389 [2024-07-15 11:27:30.786213] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:11:56.389 [2024-07-15 11:27:30.787081] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:11:56.389 [2024-07-15 11:27:30.788080] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:11:56.389 [2024-07-15 11:27:30.789092] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:56.389 [2024-07-15 11:27:30.790086] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:56.389 [2024-07-15 11:27:30.790206] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:56.389 [2024-07-15 11:27:30.791109] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:11:56.389 [2024-07-15 11:27:30.791120] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:56.389 [2024-07-15 11:27:30.791126] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:11:56.389 [2024-07-15 11:27:30.791151] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:11:56.389 [2024-07-15 11:27:30.791161] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:11:56.389 [2024-07-15 11:27:30.791179] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:56.389 [2024-07-15 11:27:30.791186] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:56.389 [2024-07-15 11:27:30.791204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:56.389 [2024-07-15 11:27:30.791281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:56.389 [2024-07-15 11:27:30.791294] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:11:56.389 [2024-07-15 11:27:30.791303] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:11:56.389 [2024-07-15 11:27:30.791309] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:11:56.389 [2024-07-15 11:27:30.791315] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:56.389 [2024-07-15 11:27:30.791322] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:11:56.389 [2024-07-15 11:27:30.791327] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:11:56.389 [2024-07-15 11:27:30.791333] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:11:56.390 [2024-07-15 11:27:30.791343] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:11:56.390 [2024-07-15 11:27:30.791355] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:56.390 [2024-07-15 11:27:30.791373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:56.390 [2024-07-15 11:27:30.791391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.390 [2024-07-15 11:27:30.791402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.390 [2024-07-15 11:27:30.791413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.390 [2024-07-15 11:27:30.791423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.390 [2024-07-15 11:27:30.791429] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:11:56.390 [2024-07-15 11:27:30.791440] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:56.390 [2024-07-15 11:27:30.791451] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:56.390 [2024-07-15 11:27:30.791470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:56.390 [2024-07-15 11:27:30.791477] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:11:56.390 [2024-07-15 11:27:30.791484] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:56.390 [2024-07-15 11:27:30.791492] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:11:56.390 [2024-07-15 11:27:30.791499] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:11:56.390 [2024-07-15 11:27:30.791510] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:56.390 [2024-07-15 11:27:30.791525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:56.390 [2024-07-15 11:27:30.791596] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:11:56.390 [2024-07-15 11:27:30.791606] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:11:56.390 [2024-07-15 11:27:30.791616] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:56.390 [2024-07-15 11:27:30.791622] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:56.390 [2024-07-15 11:27:30.791630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:56.390 [2024-07-15 11:27:30.791656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:56.390 [2024-07-15 11:27:30.791667] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:11:56.390 [2024-07-15 11:27:30.791678] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:11:56.390 [2024-07-15 11:27:30.791687] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:11:56.390 [2024-07-15 11:27:30.791696] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:56.390 [2024-07-15 11:27:30.791702] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:56.390 [2024-07-15 11:27:30.791710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:56.390 [2024-07-15 11:27:30.791741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:56.390 [2024-07-15 11:27:30.791757] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:56.390 [2024-07-15 11:27:30.791767] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:56.390 [2024-07-15 11:27:30.791775] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:56.390 [2024-07-15 11:27:30.791781] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:56.390 [2024-07-15 11:27:30.791789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:56.390 [2024-07-15 11:27:30.791813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:56.390 [2024-07-15 11:27:30.791823] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:56.390 [2024-07-15 11:27:30.791832] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:11:56.390 [2024-07-15 11:27:30.791841] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:11:56.390 [2024-07-15 11:27:30.791849] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:11:56.390 [2024-07-15 11:27:30.791855] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:56.390 [2024-07-15 11:27:30.791864] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:11:56.390 [2024-07-15 11:27:30.791870] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:11:56.390 [2024-07-15 11:27:30.791876] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:11:56.390 [2024-07-15 11:27:30.791882] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:11:56.390 [2024-07-15 11:27:30.791903] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:56.390 [2024-07-15 11:27:30.791915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:56.390 [2024-07-15 11:27:30.791929] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:56.390 [2024-07-15 11:27:30.791952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:56.390 [2024-07-15 11:27:30.791966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:56.390 [2024-07-15 11:27:30.791983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:56.390 [2024-07-15 11:27:30.791997] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:56.390 [2024-07-15 11:27:30.792009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:56.390 [2024-07-15 11:27:30.792025] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:56.390 [2024-07-15 11:27:30.792032] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:56.390 [2024-07-15 11:27:30.792036] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:56.390 [2024-07-15 11:27:30.792041] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:56.390 [2024-07-15 11:27:30.792048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:56.390 [2024-07-15 11:27:30.792057] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:56.390 
[2024-07-15 11:27:30.792063] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:56.390 [2024-07-15 11:27:30.792071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:56.390 [2024-07-15 11:27:30.792080] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:56.390 [2024-07-15 11:27:30.792086] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:56.390 [2024-07-15 11:27:30.792093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:56.390 [2024-07-15 11:27:30.792102] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:56.390 [2024-07-15 11:27:30.792108] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:56.390 [2024-07-15 11:27:30.792116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:56.390 [2024-07-15 11:27:30.792125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:56.390 [2024-07-15 11:27:30.792142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:56.390 [2024-07-15 11:27:30.792156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:56.390 [2024-07-15 11:27:30.792169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:56.390 ===================================================== 00:11:56.390 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:56.390 ===================================================== 00:11:56.390 Controller Capabilities/Features 00:11:56.390 ================================ 00:11:56.390 Vendor ID: 4e58 00:11:56.390 Subsystem Vendor ID: 4e58 00:11:56.390 Serial Number: SPDK1 00:11:56.390 Model Number: SPDK bdev Controller 00:11:56.390 Firmware Version: 24.09 00:11:56.390 Recommended Arb Burst: 6 00:11:56.390 IEEE OUI Identifier: 8d 6b 50 00:11:56.390 Multi-path I/O 00:11:56.390 May have multiple subsystem ports: Yes 00:11:56.390 May have multiple controllers: Yes 00:11:56.390 Associated with SR-IOV VF: No 00:11:56.390 Max Data Transfer Size: 131072 00:11:56.390 Max Number of Namespaces: 32 00:11:56.390 Max Number of I/O Queues: 127 00:11:56.390 NVMe Specification Version (VS): 1.3 00:11:56.390 NVMe Specification Version (Identify): 1.3 00:11:56.390 Maximum Queue Entries: 256 00:11:56.390 Contiguous Queues Required: Yes 00:11:56.390 Arbitration Mechanisms Supported 00:11:56.390 Weighted Round Robin: Not Supported 00:11:56.390 Vendor Specific: Not Supported 00:11:56.390 Reset Timeout: 15000 ms 00:11:56.390 Doorbell Stride: 4 bytes 00:11:56.390 NVM Subsystem Reset: Not Supported 00:11:56.390 Command Sets Supported 00:11:56.390 NVM Command Set: Supported 00:11:56.390 Boot Partition: Not Supported 00:11:56.390 Memory Page Size Minimum: 4096 bytes 00:11:56.390 Memory Page Size Maximum: 4096 bytes 00:11:56.390 Persistent Memory Region: Not Supported 
00:11:56.390 Optional Asynchronous Events Supported 00:11:56.390 Namespace Attribute Notices: Supported 00:11:56.391 Firmware Activation Notices: Not Supported 00:11:56.391 ANA Change Notices: Not Supported 00:11:56.391 PLE Aggregate Log Change Notices: Not Supported 00:11:56.391 LBA Status Info Alert Notices: Not Supported 00:11:56.391 EGE Aggregate Log Change Notices: Not Supported 00:11:56.391 Normal NVM Subsystem Shutdown event: Not Supported 00:11:56.391 Zone Descriptor Change Notices: Not Supported 00:11:56.391 Discovery Log Change Notices: Not Supported 00:11:56.391 Controller Attributes 00:11:56.391 128-bit Host Identifier: Supported 00:11:56.391 Non-Operational Permissive Mode: Not Supported 00:11:56.391 NVM Sets: Not Supported 00:11:56.391 Read Recovery Levels: Not Supported 00:11:56.391 Endurance Groups: Not Supported 00:11:56.391 Predictable Latency Mode: Not Supported 00:11:56.391 Traffic Based Keep ALive: Not Supported 00:11:56.391 Namespace Granularity: Not Supported 00:11:56.391 SQ Associations: Not Supported 00:11:56.391 UUID List: Not Supported 00:11:56.391 Multi-Domain Subsystem: Not Supported 00:11:56.391 Fixed Capacity Management: Not Supported 00:11:56.391 Variable Capacity Management: Not Supported 00:11:56.391 Delete Endurance Group: Not Supported 00:11:56.391 Delete NVM Set: Not Supported 00:11:56.391 Extended LBA Formats Supported: Not Supported 00:11:56.391 Flexible Data Placement Supported: Not Supported 00:11:56.391 00:11:56.391 Controller Memory Buffer Support 00:11:56.391 ================================ 00:11:56.391 Supported: No 00:11:56.391 00:11:56.391 Persistent Memory Region Support 00:11:56.391 ================================ 00:11:56.391 Supported: No 00:11:56.391 00:11:56.391 Admin Command Set Attributes 00:11:56.391 ============================ 00:11:56.391 Security Send/Receive: Not Supported 00:11:56.391 Format NVM: Not Supported 00:11:56.391 Firmware Activate/Download: Not Supported 00:11:56.391 Namespace Management: Not Supported 00:11:56.391 Device Self-Test: Not Supported 00:11:56.391 Directives: Not Supported 00:11:56.391 NVMe-MI: Not Supported 00:11:56.391 Virtualization Management: Not Supported 00:11:56.391 Doorbell Buffer Config: Not Supported 00:11:56.391 Get LBA Status Capability: Not Supported 00:11:56.391 Command & Feature Lockdown Capability: Not Supported 00:11:56.391 Abort Command Limit: 4 00:11:56.391 Async Event Request Limit: 4 00:11:56.391 Number of Firmware Slots: N/A 00:11:56.391 Firmware Slot 1 Read-Only: N/A 00:11:56.391 Firmware Activation Without Reset: N/A 00:11:56.391 Multiple Update Detection Support: N/A 00:11:56.391 Firmware Update Granularity: No Information Provided 00:11:56.391 Per-Namespace SMART Log: No 00:11:56.391 Asymmetric Namespace Access Log Page: Not Supported 00:11:56.391 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:11:56.391 Command Effects Log Page: Supported 00:11:56.391 Get Log Page Extended Data: Supported 00:11:56.391 Telemetry Log Pages: Not Supported 00:11:56.391 Persistent Event Log Pages: Not Supported 00:11:56.391 Supported Log Pages Log Page: May Support 00:11:56.391 Commands Supported & Effects Log Page: Not Supported 00:11:56.391 Feature Identifiers & Effects Log Page:May Support 00:11:56.391 NVMe-MI Commands & Effects Log Page: May Support 00:11:56.391 Data Area 4 for Telemetry Log: Not Supported 00:11:56.391 Error Log Page Entries Supported: 128 00:11:56.391 Keep Alive: Supported 00:11:56.391 Keep Alive Granularity: 10000 ms 00:11:56.391 00:11:56.391 NVM Command Set Attributes 
00:11:56.391 ========================== 00:11:56.391 Submission Queue Entry Size 00:11:56.391 Max: 64 00:11:56.391 Min: 64 00:11:56.391 Completion Queue Entry Size 00:11:56.391 Max: 16 00:11:56.391 Min: 16 00:11:56.391 Number of Namespaces: 32 00:11:56.391 Compare Command: Supported 00:11:56.391 Write Uncorrectable Command: Not Supported 00:11:56.391 Dataset Management Command: Supported 00:11:56.391 Write Zeroes Command: Supported 00:11:56.391 Set Features Save Field: Not Supported 00:11:56.391 Reservations: Not Supported 00:11:56.391 Timestamp: Not Supported 00:11:56.391 Copy: Supported 00:11:56.391 Volatile Write Cache: Present 00:11:56.391 Atomic Write Unit (Normal): 1 00:11:56.391 Atomic Write Unit (PFail): 1 00:11:56.391 Atomic Compare & Write Unit: 1 00:11:56.391 Fused Compare & Write: Supported 00:11:56.391 Scatter-Gather List 00:11:56.391 SGL Command Set: Supported (Dword aligned) 00:11:56.391 SGL Keyed: Not Supported 00:11:56.391 SGL Bit Bucket Descriptor: Not Supported 00:11:56.391 SGL Metadata Pointer: Not Supported 00:11:56.391 Oversized SGL: Not Supported 00:11:56.391 SGL Metadata Address: Not Supported 00:11:56.391 SGL Offset: Not Supported 00:11:56.391 Transport SGL Data Block: Not Supported 00:11:56.391 Replay Protected Memory Block: Not Supported 00:11:56.391 00:11:56.391 Firmware Slot Information 00:11:56.391 ========================= 00:11:56.391 Active slot: 1 00:11:56.391 Slot 1 Firmware Revision: 24.09 00:11:56.391 00:11:56.391 00:11:56.391 Commands Supported and Effects 00:11:56.391 ============================== 00:11:56.391 Admin Commands 00:11:56.391 -------------- 00:11:56.391 Get Log Page (02h): Supported 00:11:56.391 Identify (06h): Supported 00:11:56.391 Abort (08h): Supported 00:11:56.391 Set Features (09h): Supported 00:11:56.391 Get Features (0Ah): Supported 00:11:56.391 Asynchronous Event Request (0Ch): Supported 00:11:56.391 Keep Alive (18h): Supported 00:11:56.391 I/O Commands 00:11:56.391 ------------ 00:11:56.391 Flush (00h): Supported LBA-Change 00:11:56.391 Write (01h): Supported LBA-Change 00:11:56.391 Read (02h): Supported 00:11:56.391 Compare (05h): Supported 00:11:56.391 Write Zeroes (08h): Supported LBA-Change 00:11:56.391 Dataset Management (09h): Supported LBA-Change 00:11:56.391 Copy (19h): Supported LBA-Change 00:11:56.391 00:11:56.391 Error Log 00:11:56.391 ========= 00:11:56.391 00:11:56.391 Arbitration 00:11:56.391 =========== 00:11:56.391 Arbitration Burst: 1 00:11:56.391 00:11:56.391 Power Management 00:11:56.391 ================ 00:11:56.391 Number of Power States: 1 00:11:56.391 Current Power State: Power State #0 00:11:56.391 Power State #0: 00:11:56.391 Max Power: 0.00 W 00:11:56.391 Non-Operational State: Operational 00:11:56.391 Entry Latency: Not Reported 00:11:56.391 Exit Latency: Not Reported 00:11:56.391 Relative Read Throughput: 0 00:11:56.391 Relative Read Latency: 0 00:11:56.391 Relative Write Throughput: 0 00:11:56.391 Relative Write Latency: 0 00:11:56.391 Idle Power: Not Reported 00:11:56.391 Active Power: Not Reported 00:11:56.391 Non-Operational Permissive Mode: Not Supported 00:11:56.391 00:11:56.391 Health Information 00:11:56.391 ================== 00:11:56.391 Critical Warnings: 00:11:56.391 Available Spare Space: OK 00:11:56.391 Temperature: OK 00:11:56.391 Device Reliability: OK 00:11:56.391 Read Only: No 00:11:56.391 Volatile Memory Backup: OK 00:11:56.391 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:56.391 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:56.391 Available Spare: 0% 00:11:56.391 
Available Sp[2024-07-15 11:27:30.792295] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:56.391 [2024-07-15 11:27:30.792313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:56.391 [2024-07-15 11:27:30.792349] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:11:56.391 [2024-07-15 11:27:30.792371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.391 [2024-07-15 11:27:30.792380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.391 [2024-07-15 11:27:30.792387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.391 [2024-07-15 11:27:30.792395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.391 [2024-07-15 11:27:30.796265] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:56.391 [2024-07-15 11:27:30.796279] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:11:56.391 [2024-07-15 11:27:30.797150] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:56.391 [2024-07-15 11:27:30.797231] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:11:56.391 [2024-07-15 11:27:30.797240] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:11:56.391 [2024-07-15 11:27:30.798154] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:11:56.391 [2024-07-15 11:27:30.798168] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:11:56.391 [2024-07-15 11:27:30.798225] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:11:56.391 [2024-07-15 11:27:30.800212] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:56.391 are Threshold: 0% 00:11:56.391 Life Percentage Used: 0% 00:11:56.391 Data Units Read: 0 00:11:56.391 Data Units Written: 0 00:11:56.391 Host Read Commands: 0 00:11:56.391 Host Write Commands: 0 00:11:56.391 Controller Busy Time: 0 minutes 00:11:56.391 Power Cycles: 0 00:11:56.391 Power On Hours: 0 hours 00:11:56.391 Unsafe Shutdowns: 0 00:11:56.391 Unrecoverable Media Errors: 0 00:11:56.391 Lifetime Error Log Entries: 0 00:11:56.391 Warning Temperature Time: 0 minutes 00:11:56.391 Critical Temperature Time: 0 minutes 00:11:56.391 00:11:56.391 Number of Queues 00:11:56.391 ================ 00:11:56.391 Number of I/O Submission Queues: 127 00:11:56.392 Number of I/O Completion Queues: 127 00:11:56.392 00:11:56.392 Active Namespaces 00:11:56.392 ================= 00:11:56.392 Namespace ID:1 00:11:56.392 Error Recovery Timeout: Unlimited 00:11:56.392 Command 
Set Identifier: NVM (00h) 00:11:56.392 Deallocate: Supported 00:11:56.392 Deallocated/Unwritten Error: Not Supported 00:11:56.392 Deallocated Read Value: Unknown 00:11:56.392 Deallocate in Write Zeroes: Not Supported 00:11:56.392 Deallocated Guard Field: 0xFFFF 00:11:56.392 Flush: Supported 00:11:56.392 Reservation: Supported 00:11:56.392 Namespace Sharing Capabilities: Multiple Controllers 00:11:56.392 Size (in LBAs): 131072 (0GiB) 00:11:56.392 Capacity (in LBAs): 131072 (0GiB) 00:11:56.392 Utilization (in LBAs): 131072 (0GiB) 00:11:56.392 NGUID: 5A4E7359B821464B90BC3AFB8E83755C 00:11:56.392 UUID: 5a4e7359-b821-464b-90bc-3afb8e83755c 00:11:56.392 Thin Provisioning: Not Supported 00:11:56.392 Per-NS Atomic Units: Yes 00:11:56.392 Atomic Boundary Size (Normal): 0 00:11:56.392 Atomic Boundary Size (PFail): 0 00:11:56.392 Atomic Boundary Offset: 0 00:11:56.392 Maximum Single Source Range Length: 65535 00:11:56.392 Maximum Copy Length: 65535 00:11:56.392 Maximum Source Range Count: 1 00:11:56.392 NGUID/EUI64 Never Reused: No 00:11:56.392 Namespace Write Protected: No 00:11:56.392 Number of LBA Formats: 1 00:11:56.392 Current LBA Format: LBA Format #00 00:11:56.392 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:56.392 00:11:56.651 11:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:56.651 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.651 [2024-07-15 11:27:31.070568] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:01.924 Initializing NVMe Controllers 00:12:01.924 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:01.924 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:01.924 Initialization complete. Launching workers. 00:12:01.924 ======================================================== 00:12:01.924 Latency(us) 00:12:01.924 Device Information : IOPS MiB/s Average min max 00:12:01.924 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 18642.00 72.82 6873.51 2668.13 14587.68 00:12:01.924 ======================================================== 00:12:01.924 Total : 18642.00 72.82 6873.51 2668.13 14587.68 00:12:01.924 00:12:01.924 [2024-07-15 11:27:36.097649] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:01.924 11:27:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:01.924 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.924 [2024-07-15 11:27:36.377559] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:07.192 Initializing NVMe Controllers 00:12:07.192 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:07.192 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:07.192 Initialization complete. Launching workers. 
00:12:07.192 ======================================================== 00:12:07.192 Latency(us) 00:12:07.192 Device Information : IOPS MiB/s Average min max 00:12:07.192 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15591.36 60.90 8215.27 7044.08 15044.15 00:12:07.192 ======================================================== 00:12:07.192 Total : 15591.36 60.90 8215.27 7044.08 15044.15 00:12:07.192 00:12:07.192 [2024-07-15 11:27:41.421353] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:07.192 11:27:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:07.192 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.451 [2024-07-15 11:27:41.705341] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:12.725 [2024-07-15 11:27:46.775690] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:12.725 Initializing NVMe Controllers 00:12:12.725 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:12.725 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:12.726 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:12.726 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:12.726 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:12.726 Initialization complete. Launching workers. 00:12:12.726 Starting thread on core 2 00:12:12.726 Starting thread on core 3 00:12:12.726 Starting thread on core 1 00:12:12.726 11:27:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:12.726 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.726 [2024-07-15 11:27:47.127983] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:16.016 [2024-07-15 11:27:50.193981] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:16.016 Initializing NVMe Controllers 00:12:16.016 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:16.016 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:16.016 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:16.016 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:16.016 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:16.016 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:16.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:16.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:16.016 Initialization complete. Launching workers. 
00:12:16.016 Starting thread on core 1 with urgent priority queue 00:12:16.016 Starting thread on core 2 with urgent priority queue 00:12:16.016 Starting thread on core 3 with urgent priority queue 00:12:16.016 Starting thread on core 0 with urgent priority queue 00:12:16.016 SPDK bdev Controller (SPDK1 ) core 0: 6627.33 IO/s 15.09 secs/100000 ios 00:12:16.016 SPDK bdev Controller (SPDK1 ) core 1: 5485.67 IO/s 18.23 secs/100000 ios 00:12:16.016 SPDK bdev Controller (SPDK1 ) core 2: 7109.33 IO/s 14.07 secs/100000 ios 00:12:16.016 SPDK bdev Controller (SPDK1 ) core 3: 3992.67 IO/s 25.05 secs/100000 ios 00:12:16.016 ======================================================== 00:12:16.016 00:12:16.017 11:27:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:16.017 EAL: No free 2048 kB hugepages reported on node 1 00:12:16.275 [2024-07-15 11:27:50.519540] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:16.275 Initializing NVMe Controllers 00:12:16.275 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:16.275 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:16.275 Namespace ID: 1 size: 0GB 00:12:16.275 Initialization complete. 00:12:16.275 INFO: using host memory buffer for IO 00:12:16.275 Hello world! 00:12:16.275 [2024-07-15 11:27:50.554939] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:16.275 11:27:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:16.275 EAL: No free 2048 kB hugepages reported on node 1 00:12:16.534 [2024-07-15 11:27:50.858371] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:17.472 Initializing NVMe Controllers 00:12:17.472 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:17.472 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:17.472 Initialization complete. Launching workers. 
00:12:17.472 submit (in ns) avg, min, max = 9381.1, 4546.4, 4002200.0 00:12:17.472 complete (in ns) avg, min, max = 51361.7, 2703.6, 4002317.3 00:12:17.472 00:12:17.472 Submit histogram 00:12:17.472 ================ 00:12:17.472 Range in us Cumulative Count 00:12:17.472 4.538 - 4.567: 0.4881% ( 34) 00:12:17.472 4.567 - 4.596: 2.7850% ( 160) 00:12:17.472 4.596 - 4.625: 5.7852% ( 209) 00:12:17.472 4.625 - 4.655: 9.9053% ( 287) 00:12:17.472 4.655 - 4.684: 22.9400% ( 908) 00:12:17.472 4.684 - 4.713: 35.5297% ( 877) 00:12:17.472 4.713 - 4.742: 46.4542% ( 761) 00:12:17.472 4.742 - 4.771: 58.0247% ( 806) 00:12:17.472 4.771 - 4.800: 67.9443% ( 691) 00:12:17.472 4.800 - 4.829: 77.1605% ( 642) 00:12:17.472 4.829 - 4.858: 82.4146% ( 366) 00:12:17.472 4.858 - 4.887: 85.0416% ( 183) 00:12:17.472 4.887 - 4.916: 86.7069% ( 116) 00:12:17.472 4.916 - 4.945: 88.4008% ( 118) 00:12:17.472 4.945 - 4.975: 90.4106% ( 140) 00:12:17.472 4.975 - 5.004: 92.2050% ( 125) 00:12:17.472 5.004 - 5.033: 94.1860% ( 138) 00:12:17.472 5.033 - 5.062: 96.1815% ( 139) 00:12:17.472 5.062 - 5.091: 97.5022% ( 92) 00:12:17.472 5.091 - 5.120: 98.2199% ( 50) 00:12:17.472 5.120 - 5.149: 98.8803% ( 46) 00:12:17.472 5.149 - 5.178: 99.3253% ( 31) 00:12:17.472 5.178 - 5.207: 99.3971% ( 5) 00:12:17.472 5.207 - 5.236: 99.4688% ( 5) 00:12:17.472 5.236 - 5.265: 99.4832% ( 1) 00:12:17.472 5.295 - 5.324: 99.4976% ( 1) 00:12:17.472 8.145 - 8.204: 99.5119% ( 1) 00:12:17.472 8.204 - 8.262: 99.5263% ( 1) 00:12:17.472 8.378 - 8.436: 99.5406% ( 1) 00:12:17.472 8.495 - 8.553: 99.5693% ( 2) 00:12:17.472 8.785 - 8.844: 99.5837% ( 1) 00:12:17.472 9.018 - 9.076: 99.5980% ( 1) 00:12:17.472 9.135 - 9.193: 99.6124% ( 1) 00:12:17.472 9.193 - 9.251: 99.6268% ( 1) 00:12:17.472 9.251 - 9.309: 99.6411% ( 1) 00:12:17.472 9.309 - 9.367: 99.6842% ( 3) 00:12:17.472 9.425 - 9.484: 99.6985% ( 1) 00:12:17.472 9.716 - 9.775: 99.7129% ( 1) 00:12:17.472 9.775 - 9.833: 99.7272% ( 1) 00:12:17.472 10.065 - 10.124: 99.7416% ( 1) 00:12:17.472 10.124 - 10.182: 99.7560% ( 1) 00:12:17.472 10.182 - 10.240: 99.7703% ( 1) 00:12:17.472 10.240 - 10.298: 99.7847% ( 1) 00:12:17.472 10.298 - 10.356: 99.7990% ( 1) 00:12:17.472 10.356 - 10.415: 99.8134% ( 1) 00:12:17.472 10.589 - 10.647: 99.8277% ( 1) 00:12:17.472 10.822 - 10.880: 99.8564% ( 2) 00:12:17.472 11.578 - 11.636: 99.8708% ( 1) 00:12:17.472 14.604 - 14.662: 99.8852% ( 1) 00:12:17.472 3991.738 - 4021.527: 100.0000% ( 8) 00:12:17.472 00:12:17.472 Complete histogram 00:12:17.472 ================== 00:12:17.472 Range in us Cumulative Count 00:12:17.472 2.691 - 2.705: 0.0144% ( 1) 00:12:17.472 2.705 - 2.720: 0.8613% ( 59) 00:12:17.472 2.720 - 2.735: 6.8045% ( 414) 00:12:17.472 2.735 - 2.749: 14.6138% ( 544) 00:12:17.472 2.749 - 2.764: 18.0304% ( 238) 00:12:17.472 2.764 - 2.778: 20.0976% ( 144) 00:12:17.472 2.778 - 2.793: 24.9785% ( 340) 00:12:17.472 2.793 - 2.807: 50.5168% ( 1779) 00:12:17.472 2.807 - 2.822: 81.9552% ( 2190) 00:12:17.472 2.822 - 2.836: 90.5254% ( 597) 00:12:17.472 2.836 - 2.851: 92.8366% ( 161) 00:12:17.472 2.851 - 2.865: 94.3583% ( 106) 00:12:17.472 2.865 - 2.880: 95.0617% ( 49) 00:12:17.472 2.880 - 2.895: 95.4350% ( 26) 00:12:17.472 2.895 - 2.909: 96.6265% ( 83) 00:12:17.472 2.909 - 2.924: 97.7175% ( 76) 00:12:17.472 2.924 - 2.938: 98.2056% ( 34) 00:12:17.472 2.938 - 2.953: 98.3778% ( 12) 00:12:17.472 2.953 - 2.967: 98.4496% ( 5) 00:12:17.472 2.967 - 2.982: 98.5214% ( 5) 00:12:17.472 2.982 - 2.996: 98.5357% ( 1) 00:12:17.472 2.996 - 3.011: 98.5501% ( 1) 00:12:17.472 3.011 - 3.025: 98.5788% ( 2) 00:12:17.472 6.429 - 
6.458: 98.5932% ( 1) 00:12:17.472 6.633 - 6.662: 98.6075% ( 1) 00:12:17.472 6.691 - 6.720: 98.6219% ( 1) 00:12:17.472 6.749 - 6.778: 98.6362% ( 1) 00:12:17.472 6.836 - [2024-07-15 11:27:51.886121] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:17.732 6.865: 98.6506% ( 1) 00:12:17.732 6.924 - 6.953: 98.6649% ( 1) 00:12:17.732 7.331 - 7.360: 98.6793% ( 1) 00:12:17.732 7.360 - 7.389: 98.6937% ( 1) 00:12:17.732 7.418 - 7.447: 98.7080% ( 1) 00:12:17.732 7.971 - 8.029: 98.7224% ( 1) 00:12:17.732 8.262 - 8.320: 98.7511% ( 2) 00:12:17.732 8.436 - 8.495: 98.7654% ( 1) 00:12:17.732 8.553 - 8.611: 98.7798% ( 1) 00:12:17.732 2487.389 - 2502.284: 98.7941% ( 1) 00:12:17.732 3991.738 - 4021.527: 100.0000% ( 84) 00:12:17.732 00:12:17.732 11:27:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:17.732 11:27:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:17.732 11:27:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:17.732 11:27:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:17.732 11:27:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:17.732 [ 00:12:17.732 { 00:12:17.732 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:17.732 "subtype": "Discovery", 00:12:17.732 "listen_addresses": [], 00:12:17.732 "allow_any_host": true, 00:12:17.732 "hosts": [] 00:12:17.732 }, 00:12:17.732 { 00:12:17.732 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:17.732 "subtype": "NVMe", 00:12:17.732 "listen_addresses": [ 00:12:17.732 { 00:12:17.732 "trtype": "VFIOUSER", 00:12:17.732 "adrfam": "IPv4", 00:12:17.732 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:17.732 "trsvcid": "0" 00:12:17.732 } 00:12:17.732 ], 00:12:17.732 "allow_any_host": true, 00:12:17.732 "hosts": [], 00:12:17.732 "serial_number": "SPDK1", 00:12:17.732 "model_number": "SPDK bdev Controller", 00:12:17.732 "max_namespaces": 32, 00:12:17.732 "min_cntlid": 1, 00:12:17.732 "max_cntlid": 65519, 00:12:17.732 "namespaces": [ 00:12:17.732 { 00:12:17.732 "nsid": 1, 00:12:17.732 "bdev_name": "Malloc1", 00:12:17.732 "name": "Malloc1", 00:12:17.732 "nguid": "5A4E7359B821464B90BC3AFB8E83755C", 00:12:17.732 "uuid": "5a4e7359-b821-464b-90bc-3afb8e83755c" 00:12:17.732 } 00:12:17.732 ] 00:12:17.732 }, 00:12:17.732 { 00:12:17.732 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:17.732 "subtype": "NVMe", 00:12:17.732 "listen_addresses": [ 00:12:17.732 { 00:12:17.732 "trtype": "VFIOUSER", 00:12:17.732 "adrfam": "IPv4", 00:12:17.732 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:17.732 "trsvcid": "0" 00:12:17.732 } 00:12:17.732 ], 00:12:17.732 "allow_any_host": true, 00:12:17.732 "hosts": [], 00:12:17.732 "serial_number": "SPDK2", 00:12:17.732 "model_number": "SPDK bdev Controller", 00:12:17.732 "max_namespaces": 32, 00:12:17.732 "min_cntlid": 1, 00:12:17.732 "max_cntlid": 65519, 00:12:17.732 "namespaces": [ 00:12:17.732 { 00:12:17.732 "nsid": 1, 00:12:17.732 "bdev_name": "Malloc2", 00:12:17.732 "name": "Malloc2", 00:12:17.732 "nguid": "DC7A98F45C4A4677BC47A47B3663A758", 00:12:17.732 "uuid": "dc7a98f4-5c4a-4677-bc47-a47b3663a758" 00:12:17.732 } 00:12:17.732 ] 00:12:17.732 } 00:12:17.732 ] 00:12:17.732 11:27:52 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:17.732 11:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:17.732 11:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2706362 00:12:17.732 11:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:17.732 11:27:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:17.732 11:27:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:17.732 11:27:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:17.732 11:27:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:17.732 11:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:17.732 11:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:17.732 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.991 [2024-07-15 11:27:52.285108] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:17.991 Malloc3 00:12:17.991 11:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:18.250 [2024-07-15 11:27:52.646392] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:18.250 11:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:18.250 Asynchronous Event Request test 00:12:18.250 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:18.250 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:18.250 Registering asynchronous event callbacks... 00:12:18.250 Starting namespace attribute notice tests for all controllers... 00:12:18.250 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:18.250 aer_cb - Changed Namespace 00:12:18.250 Cleaning up... 
00:12:18.509 [ 00:12:18.509 { 00:12:18.509 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:18.509 "subtype": "Discovery", 00:12:18.509 "listen_addresses": [], 00:12:18.509 "allow_any_host": true, 00:12:18.509 "hosts": [] 00:12:18.509 }, 00:12:18.509 { 00:12:18.509 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:18.509 "subtype": "NVMe", 00:12:18.509 "listen_addresses": [ 00:12:18.509 { 00:12:18.509 "trtype": "VFIOUSER", 00:12:18.509 "adrfam": "IPv4", 00:12:18.509 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:18.509 "trsvcid": "0" 00:12:18.509 } 00:12:18.509 ], 00:12:18.509 "allow_any_host": true, 00:12:18.509 "hosts": [], 00:12:18.509 "serial_number": "SPDK1", 00:12:18.509 "model_number": "SPDK bdev Controller", 00:12:18.509 "max_namespaces": 32, 00:12:18.509 "min_cntlid": 1, 00:12:18.509 "max_cntlid": 65519, 00:12:18.509 "namespaces": [ 00:12:18.509 { 00:12:18.509 "nsid": 1, 00:12:18.509 "bdev_name": "Malloc1", 00:12:18.509 "name": "Malloc1", 00:12:18.509 "nguid": "5A4E7359B821464B90BC3AFB8E83755C", 00:12:18.509 "uuid": "5a4e7359-b821-464b-90bc-3afb8e83755c" 00:12:18.509 }, 00:12:18.509 { 00:12:18.509 "nsid": 2, 00:12:18.509 "bdev_name": "Malloc3", 00:12:18.509 "name": "Malloc3", 00:12:18.509 "nguid": "AAA4DEB2C0E14BFB8613F86BACEC769E", 00:12:18.509 "uuid": "aaa4deb2-c0e1-4bfb-8613-f86bacec769e" 00:12:18.509 } 00:12:18.509 ] 00:12:18.509 }, 00:12:18.509 { 00:12:18.509 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:18.509 "subtype": "NVMe", 00:12:18.509 "listen_addresses": [ 00:12:18.509 { 00:12:18.509 "trtype": "VFIOUSER", 00:12:18.509 "adrfam": "IPv4", 00:12:18.509 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:18.509 "trsvcid": "0" 00:12:18.509 } 00:12:18.509 ], 00:12:18.509 "allow_any_host": true, 00:12:18.509 "hosts": [], 00:12:18.509 "serial_number": "SPDK2", 00:12:18.509 "model_number": "SPDK bdev Controller", 00:12:18.509 "max_namespaces": 32, 00:12:18.509 "min_cntlid": 1, 00:12:18.509 "max_cntlid": 65519, 00:12:18.509 "namespaces": [ 00:12:18.509 { 00:12:18.509 "nsid": 1, 00:12:18.509 "bdev_name": "Malloc2", 00:12:18.509 "name": "Malloc2", 00:12:18.509 "nguid": "DC7A98F45C4A4677BC47A47B3663A758", 00:12:18.509 "uuid": "dc7a98f4-5c4a-4677-bc47-a47b3663a758" 00:12:18.509 } 00:12:18.509 ] 00:12:18.509 } 00:12:18.509 ] 00:12:18.509 11:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2706362 00:12:18.509 11:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:18.510 11:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:18.510 11:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:18.510 11:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:18.510 [2024-07-15 11:27:52.961150] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:12:18.510 [2024-07-15 11:27:52.961185] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2706374 ] 00:12:18.510 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.772 [2024-07-15 11:27:52.998635] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:18.772 [2024-07-15 11:27:53.001479] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:18.772 [2024-07-15 11:27:53.001509] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa41ead0000 00:12:18.772 [2024-07-15 11:27:53.002477] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:18.772 [2024-07-15 11:27:53.003486] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:18.772 [2024-07-15 11:27:53.004502] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:18.772 [2024-07-15 11:27:53.005509] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:18.772 [2024-07-15 11:27:53.006518] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:18.772 [2024-07-15 11:27:53.007527] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:18.772 [2024-07-15 11:27:53.008543] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:18.772 [2024-07-15 11:27:53.009552] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:18.772 [2024-07-15 11:27:53.010560] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:18.772 [2024-07-15 11:27:53.010573] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa41eac5000 00:12:18.772 [2024-07-15 11:27:53.011979] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:18.772 [2024-07-15 11:27:53.031740] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:18.772 [2024-07-15 11:27:53.031773] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:18.772 [2024-07-15 11:27:53.033864] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:18.772 [2024-07-15 11:27:53.033915] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:18.772 [2024-07-15 11:27:53.034014] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:12:18.772 [2024-07-15 11:27:53.034034] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:18.772 [2024-07-15 11:27:53.034042] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:18.772 [2024-07-15 11:27:53.034867] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:18.772 [2024-07-15 11:27:53.034880] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:18.772 [2024-07-15 11:27:53.034890] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:18.772 [2024-07-15 11:27:53.035867] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:18.772 [2024-07-15 11:27:53.035880] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:18.772 [2024-07-15 11:27:53.035891] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:18.773 [2024-07-15 11:27:53.036880] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:18.773 [2024-07-15 11:27:53.036897] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:18.773 [2024-07-15 11:27:53.037892] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:18.773 [2024-07-15 11:27:53.037905] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:18.773 [2024-07-15 11:27:53.037912] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:18.773 [2024-07-15 11:27:53.037921] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:18.773 [2024-07-15 11:27:53.038028] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:18.773 [2024-07-15 11:27:53.038034] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:18.773 [2024-07-15 11:27:53.038041] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:18.773 [2024-07-15 11:27:53.038900] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:18.773 [2024-07-15 11:27:53.039915] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:18.773 [2024-07-15 11:27:53.040921] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:18.773 [2024-07-15 11:27:53.041924] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:18.773 [2024-07-15 11:27:53.041975] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:18.773 [2024-07-15 11:27:53.042951] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:18.773 [2024-07-15 11:27:53.042966] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:18.773 [2024-07-15 11:27:53.042972] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:18.773 [2024-07-15 11:27:53.042998] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:18.773 [2024-07-15 11:27:53.043008] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:18.773 [2024-07-15 11:27:53.043023] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:18.773 [2024-07-15 11:27:53.043030] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:18.773 [2024-07-15 11:27:53.043045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:18.773 [2024-07-15 11:27:53.049264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:18.773 [2024-07-15 11:27:53.049280] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:18.773 [2024-07-15 11:27:53.049290] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:18.773 [2024-07-15 11:27:53.049296] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:18.773 [2024-07-15 11:27:53.049305] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:18.773 [2024-07-15 11:27:53.049312] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:18.773 [2024-07-15 11:27:53.049318] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:18.773 [2024-07-15 11:27:53.049324] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:18.773 [2024-07-15 11:27:53.049334] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:18.773 [2024-07-15 11:27:53.049347] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 
0x0 00:12:18.773 [2024-07-15 11:27:53.057265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:18.773 [2024-07-15 11:27:53.057286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.773 [2024-07-15 11:27:53.057298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.773 [2024-07-15 11:27:53.057310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.773 [2024-07-15 11:27:53.057322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.773 [2024-07-15 11:27:53.057329] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:18.773 [2024-07-15 11:27:53.057339] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:18.773 [2024-07-15 11:27:53.057351] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:18.773 [2024-07-15 11:27:53.065262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:18.773 [2024-07-15 11:27:53.065274] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:18.773 [2024-07-15 11:27:53.065281] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:18.773 [2024-07-15 11:27:53.065289] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:18.773 [2024-07-15 11:27:53.065297] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:18.773 [2024-07-15 11:27:53.065308] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:18.773 [2024-07-15 11:27:53.073263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:18.773 [2024-07-15 11:27:53.073340] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:18.773 [2024-07-15 11:27:53.073351] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:18.773 [2024-07-15 11:27:53.073361] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:18.773 [2024-07-15 11:27:53.073371] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:18.773 [2024-07-15 11:27:53.073379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 
0x2000002f9000 PRP2 0x0 00:12:18.773 [2024-07-15 11:27:53.081268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:18.773 [2024-07-15 11:27:53.081285] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:18.773 [2024-07-15 11:27:53.081300] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:18.773 [2024-07-15 11:27:53.081311] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:18.773 [2024-07-15 11:27:53.081320] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:18.773 [2024-07-15 11:27:53.081326] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:18.773 [2024-07-15 11:27:53.081335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:18.773 [2024-07-15 11:27:53.089265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:18.773 [2024-07-15 11:27:53.089285] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:18.773 [2024-07-15 11:27:53.089296] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:18.773 [2024-07-15 11:27:53.089307] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:18.773 [2024-07-15 11:27:53.089312] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:18.773 [2024-07-15 11:27:53.089320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:18.773 [2024-07-15 11:27:53.097265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:18.773 [2024-07-15 11:27:53.097280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:18.774 [2024-07-15 11:27:53.097289] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:18.774 [2024-07-15 11:27:53.097303] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:18.774 [2024-07-15 11:27:53.097310] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:12:18.774 [2024-07-15 11:27:53.097317] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:18.774 [2024-07-15 11:27:53.097324] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:18.774 
[2024-07-15 11:27:53.097330] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:18.774 [2024-07-15 11:27:53.097336] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:18.774 [2024-07-15 11:27:53.097342] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:18.774 [2024-07-15 11:27:53.097362] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:18.774 [2024-07-15 11:27:53.105266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:18.774 [2024-07-15 11:27:53.105284] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:18.774 [2024-07-15 11:27:53.113266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:18.774 [2024-07-15 11:27:53.113284] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:18.774 [2024-07-15 11:27:53.121266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:18.774 [2024-07-15 11:27:53.121284] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:18.774 [2024-07-15 11:27:53.129264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:18.774 [2024-07-15 11:27:53.129286] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:18.774 [2024-07-15 11:27:53.129293] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:18.774 [2024-07-15 11:27:53.129298] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:18.774 [2024-07-15 11:27:53.129303] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:18.774 [2024-07-15 11:27:53.129311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:18.774 [2024-07-15 11:27:53.129320] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:18.774 [2024-07-15 11:27:53.129326] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:18.774 [2024-07-15 11:27:53.129334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:18.774 [2024-07-15 11:27:53.129343] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:18.774 [2024-07-15 11:27:53.129349] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:18.774 [2024-07-15 11:27:53.129356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 
0x0 00:12:18.774 [2024-07-15 11:27:53.129366] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:18.774 [2024-07-15 11:27:53.129372] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:18.774 [2024-07-15 11:27:53.129379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:18.774 [2024-07-15 11:27:53.137265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:18.774 [2024-07-15 11:27:53.137285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:18.774 [2024-07-15 11:27:53.137299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:18.774 [2024-07-15 11:27:53.137309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:18.774 ===================================================== 00:12:18.774 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:18.774 ===================================================== 00:12:18.774 Controller Capabilities/Features 00:12:18.774 ================================ 00:12:18.774 Vendor ID: 4e58 00:12:18.774 Subsystem Vendor ID: 4e58 00:12:18.774 Serial Number: SPDK2 00:12:18.774 Model Number: SPDK bdev Controller 00:12:18.774 Firmware Version: 24.09 00:12:18.774 Recommended Arb Burst: 6 00:12:18.774 IEEE OUI Identifier: 8d 6b 50 00:12:18.774 Multi-path I/O 00:12:18.774 May have multiple subsystem ports: Yes 00:12:18.774 May have multiple controllers: Yes 00:12:18.774 Associated with SR-IOV VF: No 00:12:18.774 Max Data Transfer Size: 131072 00:12:18.774 Max Number of Namespaces: 32 00:12:18.774 Max Number of I/O Queues: 127 00:12:18.774 NVMe Specification Version (VS): 1.3 00:12:18.774 NVMe Specification Version (Identify): 1.3 00:12:18.774 Maximum Queue Entries: 256 00:12:18.774 Contiguous Queues Required: Yes 00:12:18.774 Arbitration Mechanisms Supported 00:12:18.774 Weighted Round Robin: Not Supported 00:12:18.774 Vendor Specific: Not Supported 00:12:18.774 Reset Timeout: 15000 ms 00:12:18.774 Doorbell Stride: 4 bytes 00:12:18.774 NVM Subsystem Reset: Not Supported 00:12:18.774 Command Sets Supported 00:12:18.774 NVM Command Set: Supported 00:12:18.774 Boot Partition: Not Supported 00:12:18.774 Memory Page Size Minimum: 4096 bytes 00:12:18.774 Memory Page Size Maximum: 4096 bytes 00:12:18.774 Persistent Memory Region: Not Supported 00:12:18.774 Optional Asynchronous Events Supported 00:12:18.774 Namespace Attribute Notices: Supported 00:12:18.774 Firmware Activation Notices: Not Supported 00:12:18.774 ANA Change Notices: Not Supported 00:12:18.774 PLE Aggregate Log Change Notices: Not Supported 00:12:18.774 LBA Status Info Alert Notices: Not Supported 00:12:18.774 EGE Aggregate Log Change Notices: Not Supported 00:12:18.774 Normal NVM Subsystem Shutdown event: Not Supported 00:12:18.774 Zone Descriptor Change Notices: Not Supported 00:12:18.774 Discovery Log Change Notices: Not Supported 00:12:18.774 Controller Attributes 00:12:18.774 128-bit Host Identifier: Supported 00:12:18.774 Non-Operational Permissive Mode: Not Supported 00:12:18.774 NVM Sets: Not Supported 00:12:18.774 Read Recovery Levels: Not Supported 
00:12:18.774 Endurance Groups: Not Supported 00:12:18.774 Predictable Latency Mode: Not Supported 00:12:18.774 Traffic Based Keep ALive: Not Supported 00:12:18.774 Namespace Granularity: Not Supported 00:12:18.774 SQ Associations: Not Supported 00:12:18.774 UUID List: Not Supported 00:12:18.774 Multi-Domain Subsystem: Not Supported 00:12:18.774 Fixed Capacity Management: Not Supported 00:12:18.774 Variable Capacity Management: Not Supported 00:12:18.774 Delete Endurance Group: Not Supported 00:12:18.774 Delete NVM Set: Not Supported 00:12:18.774 Extended LBA Formats Supported: Not Supported 00:12:18.774 Flexible Data Placement Supported: Not Supported 00:12:18.774 00:12:18.774 Controller Memory Buffer Support 00:12:18.774 ================================ 00:12:18.774 Supported: No 00:12:18.774 00:12:18.774 Persistent Memory Region Support 00:12:18.774 ================================ 00:12:18.774 Supported: No 00:12:18.774 00:12:18.774 Admin Command Set Attributes 00:12:18.774 ============================ 00:12:18.774 Security Send/Receive: Not Supported 00:12:18.774 Format NVM: Not Supported 00:12:18.774 Firmware Activate/Download: Not Supported 00:12:18.774 Namespace Management: Not Supported 00:12:18.774 Device Self-Test: Not Supported 00:12:18.774 Directives: Not Supported 00:12:18.774 NVMe-MI: Not Supported 00:12:18.774 Virtualization Management: Not Supported 00:12:18.774 Doorbell Buffer Config: Not Supported 00:12:18.774 Get LBA Status Capability: Not Supported 00:12:18.774 Command & Feature Lockdown Capability: Not Supported 00:12:18.774 Abort Command Limit: 4 00:12:18.774 Async Event Request Limit: 4 00:12:18.774 Number of Firmware Slots: N/A 00:12:18.774 Firmware Slot 1 Read-Only: N/A 00:12:18.774 Firmware Activation Without Reset: N/A 00:12:18.774 Multiple Update Detection Support: N/A 00:12:18.774 Firmware Update Granularity: No Information Provided 00:12:18.774 Per-Namespace SMART Log: No 00:12:18.774 Asymmetric Namespace Access Log Page: Not Supported 00:12:18.774 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:18.774 Command Effects Log Page: Supported 00:12:18.774 Get Log Page Extended Data: Supported 00:12:18.774 Telemetry Log Pages: Not Supported 00:12:18.774 Persistent Event Log Pages: Not Supported 00:12:18.774 Supported Log Pages Log Page: May Support 00:12:18.774 Commands Supported & Effects Log Page: Not Supported 00:12:18.774 Feature Identifiers & Effects Log Page:May Support 00:12:18.774 NVMe-MI Commands & Effects Log Page: May Support 00:12:18.774 Data Area 4 for Telemetry Log: Not Supported 00:12:18.774 Error Log Page Entries Supported: 128 00:12:18.774 Keep Alive: Supported 00:12:18.774 Keep Alive Granularity: 10000 ms 00:12:18.774 00:12:18.774 NVM Command Set Attributes 00:12:18.774 ========================== 00:12:18.774 Submission Queue Entry Size 00:12:18.774 Max: 64 00:12:18.774 Min: 64 00:12:18.774 Completion Queue Entry Size 00:12:18.774 Max: 16 00:12:18.775 Min: 16 00:12:18.775 Number of Namespaces: 32 00:12:18.775 Compare Command: Supported 00:12:18.775 Write Uncorrectable Command: Not Supported 00:12:18.775 Dataset Management Command: Supported 00:12:18.775 Write Zeroes Command: Supported 00:12:18.775 Set Features Save Field: Not Supported 00:12:18.775 Reservations: Not Supported 00:12:18.775 Timestamp: Not Supported 00:12:18.775 Copy: Supported 00:12:18.775 Volatile Write Cache: Present 00:12:18.775 Atomic Write Unit (Normal): 1 00:12:18.775 Atomic Write Unit (PFail): 1 00:12:18.775 Atomic Compare & Write Unit: 1 00:12:18.775 Fused Compare & Write: 
Supported 00:12:18.775 Scatter-Gather List 00:12:18.775 SGL Command Set: Supported (Dword aligned) 00:12:18.775 SGL Keyed: Not Supported 00:12:18.775 SGL Bit Bucket Descriptor: Not Supported 00:12:18.775 SGL Metadata Pointer: Not Supported 00:12:18.775 Oversized SGL: Not Supported 00:12:18.775 SGL Metadata Address: Not Supported 00:12:18.775 SGL Offset: Not Supported 00:12:18.775 Transport SGL Data Block: Not Supported 00:12:18.775 Replay Protected Memory Block: Not Supported 00:12:18.775 00:12:18.775 Firmware Slot Information 00:12:18.775 ========================= 00:12:18.775 Active slot: 1 00:12:18.775 Slot 1 Firmware Revision: 24.09 00:12:18.775 00:12:18.775 00:12:18.775 Commands Supported and Effects 00:12:18.775 ============================== 00:12:18.775 Admin Commands 00:12:18.775 -------------- 00:12:18.775 Get Log Page (02h): Supported 00:12:18.775 Identify (06h): Supported 00:12:18.775 Abort (08h): Supported 00:12:18.775 Set Features (09h): Supported 00:12:18.775 Get Features (0Ah): Supported 00:12:18.775 Asynchronous Event Request (0Ch): Supported 00:12:18.775 Keep Alive (18h): Supported 00:12:18.775 I/O Commands 00:12:18.775 ------------ 00:12:18.775 Flush (00h): Supported LBA-Change 00:12:18.775 Write (01h): Supported LBA-Change 00:12:18.775 Read (02h): Supported 00:12:18.775 Compare (05h): Supported 00:12:18.775 Write Zeroes (08h): Supported LBA-Change 00:12:18.775 Dataset Management (09h): Supported LBA-Change 00:12:18.775 Copy (19h): Supported LBA-Change 00:12:18.775 00:12:18.775 Error Log 00:12:18.775 ========= 00:12:18.775 00:12:18.775 Arbitration 00:12:18.775 =========== 00:12:18.775 Arbitration Burst: 1 00:12:18.775 00:12:18.775 Power Management 00:12:18.775 ================ 00:12:18.775 Number of Power States: 1 00:12:18.775 Current Power State: Power State #0 00:12:18.775 Power State #0: 00:12:18.775 Max Power: 0.00 W 00:12:18.775 Non-Operational State: Operational 00:12:18.775 Entry Latency: Not Reported 00:12:18.775 Exit Latency: Not Reported 00:12:18.775 Relative Read Throughput: 0 00:12:18.775 Relative Read Latency: 0 00:12:18.775 Relative Write Throughput: 0 00:12:18.775 Relative Write Latency: 0 00:12:18.775 Idle Power: Not Reported 00:12:18.775 Active Power: Not Reported 00:12:18.775 Non-Operational Permissive Mode: Not Supported 00:12:18.775 00:12:18.775 Health Information 00:12:18.775 ================== 00:12:18.775 Critical Warnings: 00:12:18.775 Available Spare Space: OK 00:12:18.775 Temperature: OK 00:12:18.775 Device Reliability: OK 00:12:18.775 Read Only: No 00:12:18.775 Volatile Memory Backup: OK 00:12:18.775 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:18.775 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:18.775 Available Spare: 0% 00:12:18.775 Available Sp[2024-07-15 11:27:53.137426] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:18.775 [2024-07-15 11:27:53.145268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:18.775 [2024-07-15 11:27:53.145312] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:18.775 [2024-07-15 11:27:53.145327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.775 [2024-07-15 11:27:53.145338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.775 [2024-07-15 11:27:53.145348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.775 [2024-07-15 11:27:53.145356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.775 [2024-07-15 11:27:53.145432] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:18.775 [2024-07-15 11:27:53.145448] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:18.775 [2024-07-15 11:27:53.146441] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:18.775 [2024-07-15 11:27:53.146503] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:18.775 [2024-07-15 11:27:53.146513] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:18.775 [2024-07-15 11:27:53.147454] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:18.775 [2024-07-15 11:27:53.147470] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:18.775 [2024-07-15 11:27:53.147525] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:18.775 [2024-07-15 11:27:53.150267] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:18.775 are Threshold: 0% 00:12:18.775 Life Percentage Used: 0% 00:12:18.775 Data Units Read: 0 00:12:18.775 Data Units Written: 0 00:12:18.775 Host Read Commands: 0 00:12:18.775 Host Write Commands: 0 00:12:18.775 Controller Busy Time: 0 minutes 00:12:18.775 Power Cycles: 0 00:12:18.775 Power On Hours: 0 hours 00:12:18.775 Unsafe Shutdowns: 0 00:12:18.775 Unrecoverable Media Errors: 0 00:12:18.775 Lifetime Error Log Entries: 0 00:12:18.775 Warning Temperature Time: 0 minutes 00:12:18.775 Critical Temperature Time: 0 minutes 00:12:18.775 00:12:18.775 Number of Queues 00:12:18.775 ================ 00:12:18.775 Number of I/O Submission Queues: 127 00:12:18.775 Number of I/O Completion Queues: 127 00:12:18.775 00:12:18.775 Active Namespaces 00:12:18.775 ================= 00:12:18.775 Namespace ID:1 00:12:18.775 Error Recovery Timeout: Unlimited 00:12:18.775 Command Set Identifier: NVM (00h) 00:12:18.775 Deallocate: Supported 00:12:18.775 Deallocated/Unwritten Error: Not Supported 00:12:18.775 Deallocated Read Value: Unknown 00:12:18.775 Deallocate in Write Zeroes: Not Supported 00:12:18.775 Deallocated Guard Field: 0xFFFF 00:12:18.775 Flush: Supported 00:12:18.775 Reservation: Supported 00:12:18.775 Namespace Sharing Capabilities: Multiple Controllers 00:12:18.775 Size (in LBAs): 131072 (0GiB) 00:12:18.775 Capacity (in LBAs): 131072 (0GiB) 00:12:18.775 Utilization (in LBAs): 131072 (0GiB) 00:12:18.775 NGUID: DC7A98F45C4A4677BC47A47B3663A758 00:12:18.775 UUID: dc7a98f4-5c4a-4677-bc47-a47b3663a758 00:12:18.775 Thin Provisioning: Not Supported 00:12:18.775 Per-NS Atomic Units: Yes 00:12:18.775 Atomic Boundary Size (Normal): 0 00:12:18.775 Atomic Boundary Size 
(PFail): 0 00:12:18.775 Atomic Boundary Offset: 0 00:12:18.775 Maximum Single Source Range Length: 65535 00:12:18.775 Maximum Copy Length: 65535 00:12:18.775 Maximum Source Range Count: 1 00:12:18.775 NGUID/EUI64 Never Reused: No 00:12:18.775 Namespace Write Protected: No 00:12:18.775 Number of LBA Formats: 1 00:12:18.775 Current LBA Format: LBA Format #00 00:12:18.775 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:18.775 00:12:18.775 11:27:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:19.034 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.034 [2024-07-15 11:27:53.420022] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:24.305 Initializing NVMe Controllers 00:12:24.305 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:24.305 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:24.305 Initialization complete. Launching workers. 00:12:24.305 ======================================================== 00:12:24.305 Latency(us) 00:12:24.305 Device Information : IOPS MiB/s Average min max 00:12:24.305 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 18638.69 72.81 6868.11 2670.00 14565.17 00:12:24.305 ======================================================== 00:12:24.305 Total : 18638.69 72.81 6868.11 2670.00 14565.17 00:12:24.305 00:12:24.305 [2024-07-15 11:27:58.524571] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:24.305 11:27:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:24.305 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.563 [2024-07-15 11:27:58.813944] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:29.898 Initializing NVMe Controllers 00:12:29.898 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:29.898 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:29.898 Initialization complete. Launching workers. 
00:12:29.898 ======================================================== 00:12:29.898 Latency(us) 00:12:29.898 Device Information : IOPS MiB/s Average min max 00:12:29.898 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24124.59 94.24 5306.35 1571.93 7501.39 00:12:29.898 ======================================================== 00:12:29.898 Total : 24124.59 94.24 5306.35 1571.93 7501.39 00:12:29.898 00:12:29.898 [2024-07-15 11:28:03.837712] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:29.898 11:28:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:29.898 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.898 [2024-07-15 11:28:04.122956] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:35.168 [2024-07-15 11:28:09.275399] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:35.168 Initializing NVMe Controllers 00:12:35.168 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:35.168 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:35.168 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:35.169 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:35.169 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:35.169 Initialization complete. Launching workers. 00:12:35.169 Starting thread on core 2 00:12:35.169 Starting thread on core 3 00:12:35.169 Starting thread on core 1 00:12:35.169 11:28:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:35.169 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.169 [2024-07-15 11:28:09.621917] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:38.454 [2024-07-15 11:28:12.693963] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:38.454 Initializing NVMe Controllers 00:12:38.454 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:38.454 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:38.454 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:38.454 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:38.454 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:38.454 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:38.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:38.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:38.454 Initialization complete. Launching workers. 
00:12:38.454 Starting thread on core 1 with urgent priority queue 00:12:38.454 Starting thread on core 2 with urgent priority queue 00:12:38.454 Starting thread on core 3 with urgent priority queue 00:12:38.454 Starting thread on core 0 with urgent priority queue 00:12:38.454 SPDK bdev Controller (SPDK2 ) core 0: 6609.33 IO/s 15.13 secs/100000 ios 00:12:38.454 SPDK bdev Controller (SPDK2 ) core 1: 3964.33 IO/s 25.22 secs/100000 ios 00:12:38.454 SPDK bdev Controller (SPDK2 ) core 2: 5058.67 IO/s 19.77 secs/100000 ios 00:12:38.454 SPDK bdev Controller (SPDK2 ) core 3: 6564.33 IO/s 15.23 secs/100000 ios 00:12:38.454 ======================================================== 00:12:38.454 00:12:38.454 11:28:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:38.454 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.713 [2024-07-15 11:28:13.019959] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:38.713 Initializing NVMe Controllers 00:12:38.713 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:38.713 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:38.713 Namespace ID: 1 size: 0GB 00:12:38.713 Initialization complete. 00:12:38.713 INFO: using host memory buffer for IO 00:12:38.713 Hello world! 00:12:38.713 [2024-07-15 11:28:13.029655] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:38.713 11:28:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:38.713 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.973 [2024-07-15 11:28:13.361094] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:40.349 Initializing NVMe Controllers 00:12:40.349 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:40.349 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:40.349 Initialization complete. Launching workers. 
00:12:40.349 submit (in ns) avg, min, max = 8688.1, 4569.1, 4030202.7 00:12:40.349 complete (in ns) avg, min, max = 41408.0, 2716.4, 4249014.5 00:12:40.349 00:12:40.349 Submit histogram 00:12:40.349 ================ 00:12:40.349 Range in us Cumulative Count 00:12:40.349 4.567 - 4.596: 0.6755% ( 63) 00:12:40.349 4.596 - 4.625: 2.7343% ( 192) 00:12:40.349 4.625 - 4.655: 5.4686% ( 255) 00:12:40.349 4.655 - 4.684: 8.9106% ( 321) 00:12:40.349 4.684 - 4.713: 20.2981% ( 1062) 00:12:40.349 4.713 - 4.742: 32.4255% ( 1131) 00:12:40.349 4.742 - 4.771: 43.5664% ( 1039) 00:12:40.349 4.771 - 4.800: 55.5222% ( 1115) 00:12:40.349 4.800 - 4.829: 65.9018% ( 968) 00:12:40.349 4.829 - 4.858: 75.7774% ( 921) 00:12:40.349 4.858 - 4.887: 81.9430% ( 575) 00:12:40.349 4.887 - 4.916: 84.9131% ( 277) 00:12:40.349 4.916 - 4.945: 86.6181% ( 159) 00:12:40.349 4.945 - 4.975: 88.1621% ( 144) 00:12:40.349 4.975 - 5.004: 90.0815% ( 179) 00:12:40.349 5.004 - 5.033: 92.0116% ( 180) 00:12:40.349 5.033 - 5.062: 93.8988% ( 176) 00:12:40.349 5.062 - 5.091: 95.7324% ( 171) 00:12:40.349 5.091 - 5.120: 97.1156% ( 129) 00:12:40.349 5.120 - 5.149: 98.1128% ( 93) 00:12:40.349 5.149 - 5.178: 98.7240% ( 57) 00:12:40.349 5.178 - 5.207: 99.1529% ( 40) 00:12:40.349 5.207 - 5.236: 99.3137% ( 15) 00:12:40.349 5.236 - 5.265: 99.3566% ( 4) 00:12:40.349 5.265 - 5.295: 99.4103% ( 5) 00:12:40.349 5.295 - 5.324: 99.4210% ( 1) 00:12:40.349 5.324 - 5.353: 99.4317% ( 1) 00:12:40.349 5.353 - 5.382: 99.4639% ( 3) 00:12:40.349 5.382 - 5.411: 99.4746% ( 1) 00:12:40.349 5.440 - 5.469: 99.4960% ( 2) 00:12:40.349 7.564 - 7.622: 99.5068% ( 1) 00:12:40.349 7.855 - 7.913: 99.5175% ( 1) 00:12:40.349 8.087 - 8.145: 99.5282% ( 1) 00:12:40.349 8.785 - 8.844: 99.5496% ( 2) 00:12:40.349 8.902 - 8.960: 99.5711% ( 2) 00:12:40.349 9.018 - 9.076: 99.5925% ( 2) 00:12:40.349 9.076 - 9.135: 99.6140% ( 2) 00:12:40.349 9.309 - 9.367: 99.6354% ( 2) 00:12:40.349 9.425 - 9.484: 99.6462% ( 1) 00:12:40.349 9.600 - 9.658: 99.6676% ( 2) 00:12:40.349 9.658 - 9.716: 99.6890% ( 2) 00:12:40.349 9.716 - 9.775: 99.7105% ( 2) 00:12:40.349 9.775 - 9.833: 99.7212% ( 1) 00:12:40.349 9.833 - 9.891: 99.7319% ( 1) 00:12:40.349 9.891 - 9.949: 99.7427% ( 1) 00:12:40.349 9.949 - 10.007: 99.7534% ( 1) 00:12:40.349 10.007 - 10.065: 99.7855% ( 3) 00:12:40.349 10.124 - 10.182: 99.8070% ( 2) 00:12:40.349 10.182 - 10.240: 99.8177% ( 1) 00:12:40.349 10.356 - 10.415: 99.8284% ( 1) 00:12:40.349 10.531 - 10.589: 99.8392% ( 1) 00:12:40.349 10.589 - 10.647: 99.8499% ( 1) 00:12:40.349 10.938 - 10.996: 99.8606% ( 1) 00:12:40.349 11.404 - 11.462: 99.8713% ( 1) 00:12:40.349 12.044 - 12.102: 99.8821% ( 1) 00:12:40.349 12.276 - 12.335: 99.8928% ( 1) 00:12:40.349 16.175 - 16.291: 99.9035% ( 1) 00:12:40.349 3991.738 - 4021.527: 99.9893% ( 8) 00:12:40.349 4021.527 - 4051.316: 100.0000% ( 1) 00:12:40.349 00:12:40.349 Complete histogram 00:12:40.349 ================== 00:12:40.349 Range in us Cumulative Count 00:12:40.349 2.705 - 2.720: 0.0214% ( 2) 00:12:40.349 2.720 - 2.735: 0.7399% ( 67) 00:12:40.349 2.735 - 2.749: 6.7553% ( 561) 00:12:40.349 2.749 - 2.764: 15.6659% ( 831) 00:12:40.349 2.764 - 2.778: 20.1265% ( 416) 00:12:40.349 2.778 - 2.793: 31.6534% ( 1075) 00:12:40.349 2.793 - 2.807: 63.2104% ( 2943) 00:12:40.349 2.807 - 2.822: 84.6987% ( 2004) 00:12:40.349 2.822 - 2.836: 89.9850% ( 493) 00:12:40.349 2.836 - 2.851: 92.7943% ( 262) 00:12:40.349 2.851 - 2.865: 94.7351% ( 181) 00:12:40.349 2.865 - 2.880: 95.5501% ( 76) 00:12:40.349 2.880 - 2.895: 96.8904% ( 125) 00:12:40.349 2.895 - 2.909: 98.0270% ( 106) 
00:12:40.349 2.909 - 2.924: 98.4881% ( 43) 00:12:40.349 2.924 - 2.938: 98.6489% ( 15) 00:12:40.349 2.938 - 2.953: 98.7347% ( 8) 00:12:40.349 2.953 - 2.967: 98.8098% ( 7) 00:12:40.349 2.967 - [2024-07-15 11:28:14.457681] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:40.349 2.982: 98.8419% ( 3) 00:12:40.349 2.982 - 2.996: 98.8741% ( 3) 00:12:40.349 2.996 - 3.011: 98.8848% ( 1) 00:12:40.349 3.025 - 3.040: 98.8956% ( 1) 00:12:40.349 3.040 - 3.055: 98.9063% ( 1) 00:12:40.349 6.342 - 6.371: 98.9170% ( 1) 00:12:40.349 6.545 - 6.575: 98.9277% ( 1) 00:12:40.349 6.865 - 6.895: 98.9385% ( 1) 00:12:40.349 7.215 - 7.244: 98.9492% ( 1) 00:12:40.349 7.622 - 7.680: 98.9599% ( 1) 00:12:40.349 7.855 - 7.913: 98.9813% ( 2) 00:12:40.349 8.320 - 8.378: 98.9921% ( 1) 00:12:40.349 9.251 - 9.309: 99.0028% ( 1) 00:12:40.349 9.309 - 9.367: 99.0135% ( 1) 00:12:40.349 9.600 - 9.658: 99.0242% ( 1) 00:12:40.349 9.833 - 9.891: 99.0350% ( 1) 00:12:40.349 3991.738 - 4021.527: 99.9786% ( 88) 00:12:40.349 4021.527 - 4051.316: 99.9893% ( 1) 00:12:40.349 4230.051 - 4259.840: 100.0000% ( 1) 00:12:40.349 00:12:40.349 11:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:40.349 11:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:40.349 11:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:40.349 11:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:40.349 11:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:40.349 [ 00:12:40.349 { 00:12:40.349 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:40.349 "subtype": "Discovery", 00:12:40.349 "listen_addresses": [], 00:12:40.349 "allow_any_host": true, 00:12:40.349 "hosts": [] 00:12:40.349 }, 00:12:40.349 { 00:12:40.349 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:40.349 "subtype": "NVMe", 00:12:40.349 "listen_addresses": [ 00:12:40.349 { 00:12:40.349 "trtype": "VFIOUSER", 00:12:40.349 "adrfam": "IPv4", 00:12:40.349 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:40.349 "trsvcid": "0" 00:12:40.349 } 00:12:40.349 ], 00:12:40.349 "allow_any_host": true, 00:12:40.349 "hosts": [], 00:12:40.349 "serial_number": "SPDK1", 00:12:40.350 "model_number": "SPDK bdev Controller", 00:12:40.350 "max_namespaces": 32, 00:12:40.350 "min_cntlid": 1, 00:12:40.350 "max_cntlid": 65519, 00:12:40.350 "namespaces": [ 00:12:40.350 { 00:12:40.350 "nsid": 1, 00:12:40.350 "bdev_name": "Malloc1", 00:12:40.350 "name": "Malloc1", 00:12:40.350 "nguid": "5A4E7359B821464B90BC3AFB8E83755C", 00:12:40.350 "uuid": "5a4e7359-b821-464b-90bc-3afb8e83755c" 00:12:40.350 }, 00:12:40.350 { 00:12:40.350 "nsid": 2, 00:12:40.350 "bdev_name": "Malloc3", 00:12:40.350 "name": "Malloc3", 00:12:40.350 "nguid": "AAA4DEB2C0E14BFB8613F86BACEC769E", 00:12:40.350 "uuid": "aaa4deb2-c0e1-4bfb-8613-f86bacec769e" 00:12:40.350 } 00:12:40.350 ] 00:12:40.350 }, 00:12:40.350 { 00:12:40.350 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:40.350 "subtype": "NVMe", 00:12:40.350 "listen_addresses": [ 00:12:40.350 { 00:12:40.350 "trtype": "VFIOUSER", 00:12:40.350 "adrfam": "IPv4", 00:12:40.350 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:40.350 "trsvcid": "0" 00:12:40.350 } 00:12:40.350 ], 
00:12:40.350 "allow_any_host": true, 00:12:40.350 "hosts": [], 00:12:40.350 "serial_number": "SPDK2", 00:12:40.350 "model_number": "SPDK bdev Controller", 00:12:40.350 "max_namespaces": 32, 00:12:40.350 "min_cntlid": 1, 00:12:40.350 "max_cntlid": 65519, 00:12:40.350 "namespaces": [ 00:12:40.350 { 00:12:40.350 "nsid": 1, 00:12:40.350 "bdev_name": "Malloc2", 00:12:40.350 "name": "Malloc2", 00:12:40.350 "nguid": "DC7A98F45C4A4677BC47A47B3663A758", 00:12:40.350 "uuid": "dc7a98f4-5c4a-4677-bc47-a47b3663a758" 00:12:40.350 } 00:12:40.350 ] 00:12:40.350 } 00:12:40.350 ] 00:12:40.350 11:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:40.350 11:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2710316 00:12:40.350 11:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:40.350 11:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:40.350 11:28:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:40.350 11:28:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:40.350 11:28:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:40.350 11:28:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:40.350 11:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:40.350 11:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:40.608 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.608 [2024-07-15 11:28:14.964088] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:40.608 Malloc4 00:12:40.608 11:28:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:40.866 [2024-07-15 11:28:15.286911] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:40.866 11:28:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:41.136 Asynchronous Event Request test 00:12:41.136 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:41.136 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:41.136 Registering asynchronous event callbacks... 00:12:41.136 Starting namespace attribute notice tests for all controllers... 00:12:41.136 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:41.136 aer_cb - Changed Namespace 00:12:41.136 Cleaning up... 
00:12:41.136 [ 00:12:41.136 { 00:12:41.136 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:41.136 "subtype": "Discovery", 00:12:41.136 "listen_addresses": [], 00:12:41.136 "allow_any_host": true, 00:12:41.136 "hosts": [] 00:12:41.136 }, 00:12:41.136 { 00:12:41.136 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:41.136 "subtype": "NVMe", 00:12:41.136 "listen_addresses": [ 00:12:41.136 { 00:12:41.136 "trtype": "VFIOUSER", 00:12:41.136 "adrfam": "IPv4", 00:12:41.136 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:41.136 "trsvcid": "0" 00:12:41.136 } 00:12:41.136 ], 00:12:41.136 "allow_any_host": true, 00:12:41.136 "hosts": [], 00:12:41.136 "serial_number": "SPDK1", 00:12:41.136 "model_number": "SPDK bdev Controller", 00:12:41.136 "max_namespaces": 32, 00:12:41.136 "min_cntlid": 1, 00:12:41.136 "max_cntlid": 65519, 00:12:41.136 "namespaces": [ 00:12:41.136 { 00:12:41.136 "nsid": 1, 00:12:41.136 "bdev_name": "Malloc1", 00:12:41.136 "name": "Malloc1", 00:12:41.136 "nguid": "5A4E7359B821464B90BC3AFB8E83755C", 00:12:41.136 "uuid": "5a4e7359-b821-464b-90bc-3afb8e83755c" 00:12:41.136 }, 00:12:41.136 { 00:12:41.136 "nsid": 2, 00:12:41.136 "bdev_name": "Malloc3", 00:12:41.136 "name": "Malloc3", 00:12:41.136 "nguid": "AAA4DEB2C0E14BFB8613F86BACEC769E", 00:12:41.136 "uuid": "aaa4deb2-c0e1-4bfb-8613-f86bacec769e" 00:12:41.136 } 00:12:41.136 ] 00:12:41.136 }, 00:12:41.136 { 00:12:41.136 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:41.136 "subtype": "NVMe", 00:12:41.136 "listen_addresses": [ 00:12:41.136 { 00:12:41.136 "trtype": "VFIOUSER", 00:12:41.136 "adrfam": "IPv4", 00:12:41.136 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:41.136 "trsvcid": "0" 00:12:41.136 } 00:12:41.136 ], 00:12:41.136 "allow_any_host": true, 00:12:41.136 "hosts": [], 00:12:41.136 "serial_number": "SPDK2", 00:12:41.136 "model_number": "SPDK bdev Controller", 00:12:41.136 "max_namespaces": 32, 00:12:41.136 "min_cntlid": 1, 00:12:41.136 "max_cntlid": 65519, 00:12:41.136 "namespaces": [ 00:12:41.136 { 00:12:41.136 "nsid": 1, 00:12:41.136 "bdev_name": "Malloc2", 00:12:41.136 "name": "Malloc2", 00:12:41.136 "nguid": "DC7A98F45C4A4677BC47A47B3663A758", 00:12:41.136 "uuid": "dc7a98f4-5c4a-4677-bc47-a47b3663a758" 00:12:41.136 }, 00:12:41.136 { 00:12:41.136 "nsid": 2, 00:12:41.136 "bdev_name": "Malloc4", 00:12:41.136 "name": "Malloc4", 00:12:41.136 "nguid": "4AAA8E36D8FA405CA01FD0F35634E180", 00:12:41.136 "uuid": "4aaa8e36-d8fa-405c-a01f-d0f35634e180" 00:12:41.136 } 00:12:41.136 ] 00:12:41.136 } 00:12:41.136 ] 00:12:41.136 11:28:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2710316 00:12:41.136 11:28:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:41.136 11:28:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2701751 00:12:41.136 11:28:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2701751 ']' 00:12:41.136 11:28:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2701751 00:12:41.136 11:28:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:41.136 11:28:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:41.136 11:28:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2701751 00:12:41.394 11:28:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:41.394 11:28:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:12:41.394 11:28:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2701751' 00:12:41.394 killing process with pid 2701751 00:12:41.394 11:28:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2701751 00:12:41.394 11:28:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2701751 00:12:41.652 11:28:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:41.652 11:28:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:41.652 11:28:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:41.652 11:28:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:41.652 11:28:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:41.652 11:28:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2710583 00:12:41.652 11:28:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2710583' 00:12:41.652 Process pid: 2710583 00:12:41.652 11:28:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:41.652 11:28:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:41.652 11:28:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2710583 00:12:41.652 11:28:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2710583 ']' 00:12:41.652 11:28:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.652 11:28:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:41.652 11:28:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.652 11:28:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:41.652 11:28:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:41.652 [2024-07-15 11:28:15.997472] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:41.652 [2024-07-15 11:28:16.000063] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:12:41.652 [2024-07-15 11:28:16.000145] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.652 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.652 [2024-07-15 11:28:16.115582] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:41.911 [2024-07-15 11:28:16.202057] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:41.911 [2024-07-15 11:28:16.202102] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:41.911 [2024-07-15 11:28:16.202112] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.911 [2024-07-15 11:28:16.202121] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.911 [2024-07-15 11:28:16.202128] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:41.911 [2024-07-15 11:28:16.202231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.911 [2024-07-15 11:28:16.202344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:41.911 [2024-07-15 11:28:16.202380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.911 [2024-07-15 11:28:16.202379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:41.911 [2024-07-15 11:28:16.287486] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:41.911 [2024-07-15 11:28:16.287876] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:41.911 [2024-07-15 11:28:16.288173] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:41.911 [2024-07-15 11:28:16.288331] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:41.911 [2024-07-15 11:28:16.288691] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:12:42.477 11:28:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:42.477 11:28:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:42.477 11:28:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:43.412 11:28:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:43.670 11:28:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:43.670 11:28:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:43.670 11:28:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:43.670 11:28:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:43.670 11:28:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:43.928 Malloc1 00:12:43.928 11:28:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:44.186 11:28:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:44.445 11:28:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:44.703 11:28:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:12:44.703 11:28:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:44.703 11:28:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:44.963 Malloc2 00:12:45.222 11:28:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:45.480 11:28:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:45.738 11:28:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:45.997 11:28:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:45.997 11:28:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2710583 00:12:45.997 11:28:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2710583 ']' 00:12:45.997 11:28:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2710583 00:12:45.997 11:28:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:45.997 11:28:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:45.997 11:28:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2710583 00:12:45.997 11:28:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:45.997 11:28:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:45.997 11:28:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2710583' 00:12:45.997 killing process with pid 2710583 00:12:45.997 11:28:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2710583 00:12:45.997 11:28:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2710583 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:46.256 00:12:46.256 real 0m53.651s 00:12:46.256 user 3m31.803s 00:12:46.256 sys 0m4.208s 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:46.256 ************************************ 00:12:46.256 END TEST nvmf_vfio_user 00:12:46.256 ************************************ 00:12:46.256 11:28:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:46.256 11:28:20 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:46.256 11:28:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:46.256 11:28:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:46.256 11:28:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:46.256 ************************************ 00:12:46.256 START 
TEST nvmf_vfio_user_nvme_compliance 00:12:46.256 ************************************ 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:46.256 * Looking for test storage... 00:12:46.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.256 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2711448 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2711448' 00:12:46.257 Process pid: 2711448 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2711448 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 2711448 ']' 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:46.257 11:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:46.516 [2024-07-15 11:28:20.758728] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:12:46.516 [2024-07-15 11:28:20.758787] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.516 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.516 [2024-07-15 11:28:20.841750] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:46.516 [2024-07-15 11:28:20.930522] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.516 [2024-07-15 11:28:20.930568] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.516 [2024-07-15 11:28:20.930578] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.516 [2024-07-15 11:28:20.930587] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.516 [2024-07-15 11:28:20.930595] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
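The app_setup_trace notices above describe how to inspect the 0xFFFF tracepoint group enabled for this run. A minimal sketch of both options follows; the shm id 0 and the nvmf prefix come straight from the notices, while the binary location, the copy destination, and the use of -f to read a saved trace file are assumptions for illustration.
  # live snapshot while the target is still running
  build/bin/spdk_trace -s nvmf -i 0
  # or keep the shared-memory trace for offline analysis and parse the copy later
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
  build/bin/spdk_trace -f /tmp/nvmf_trace.0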
00:12:46.516 [2024-07-15 11:28:20.930651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.516 [2024-07-15 11:28:20.930763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.516 [2024-07-15 11:28:20.930763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.451 11:28:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:47.451 11:28:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:12:47.451 11:28:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:48.386 11:28:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:48.386 11:28:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:48.386 11:28:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:48.386 11:28:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.386 11:28:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:48.386 11:28:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.386 11:28:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:48.386 11:28:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:48.386 11:28:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.386 11:28:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:48.386 malloc0 00:12:48.386 11:28:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.386 11:28:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:48.386 11:28:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.386 11:28:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:48.386 11:28:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.386 11:28:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:48.386 11:28:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.386 11:28:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:48.386 11:28:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.386 11:28:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:48.386 11:28:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.386 11:28:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:48.386 11:28:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.386 
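Stripped of the xtrace noise, the compliance target wiring above amounts to one directory plus five RPCs. The sketch below uses scripts/rpc.py directly instead of the rpc_cmd wrapper; all names and arguments match the trace.
  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  # nvme_compliance then connects with trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0, as in the next step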
11:28:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:48.386 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.644 00:12:48.644 00:12:48.644 CUnit - A unit testing framework for C - Version 2.1-3 00:12:48.644 http://cunit.sourceforge.net/ 00:12:48.644 00:12:48.644 00:12:48.644 Suite: nvme_compliance 00:12:48.644 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 11:28:22.909207] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:48.644 [2024-07-15 11:28:22.910715] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:48.644 [2024-07-15 11:28:22.910740] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:48.644 [2024-07-15 11:28:22.910752] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:48.644 [2024-07-15 11:28:22.912237] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:48.644 passed 00:12:48.644 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 11:28:23.015310] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:48.644 [2024-07-15 11:28:23.018354] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:48.644 passed 00:12:48.902 Test: admin_identify_ns ...[2024-07-15 11:28:23.122785] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:48.902 [2024-07-15 11:28:23.182276] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:48.902 [2024-07-15 11:28:23.190272] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:48.902 [2024-07-15 11:28:23.211408] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:48.902 passed 00:12:48.902 Test: admin_get_features_mandatory_features ...[2024-07-15 11:28:23.312578] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:48.903 [2024-07-15 11:28:23.317614] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:48.903 passed 00:12:49.160 Test: admin_get_features_optional_features ...[2024-07-15 11:28:23.420588] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:49.160 [2024-07-15 11:28:23.423623] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:49.160 passed 00:12:49.160 Test: admin_set_features_number_of_queues ...[2024-07-15 11:28:23.520768] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:49.417 [2024-07-15 11:28:23.626381] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:49.417 passed 00:12:49.417 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 11:28:23.726425] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:49.417 [2024-07-15 11:28:23.729460] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:49.417 passed 00:12:49.417 Test: admin_get_log_page_with_lpo ...[2024-07-15 11:28:23.833583] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:49.676 [2024-07-15 11:28:23.902282] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:49.676 [2024-07-15 11:28:23.915348] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:49.676 passed 00:12:49.676 Test: fabric_property_get ...[2024-07-15 11:28:24.015448] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:49.676 [2024-07-15 11:28:24.016843] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:49.676 [2024-07-15 11:28:24.018489] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:49.676 passed 00:12:49.676 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 11:28:24.123532] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:49.676 [2024-07-15 11:28:24.125039] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:49.676 [2024-07-15 11:28:24.126586] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:49.955 passed 00:12:49.955 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 11:28:24.224695] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:49.955 [2024-07-15 11:28:24.309268] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:49.955 [2024-07-15 11:28:24.325260] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:49.955 [2024-07-15 11:28:24.330363] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:49.955 passed 00:12:50.212 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 11:28:24.430380] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:50.212 [2024-07-15 11:28:24.431854] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:50.212 [2024-07-15 11:28:24.433427] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:50.212 passed 00:12:50.212 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 11:28:24.532561] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:50.212 [2024-07-15 11:28:24.609264] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:50.213 [2024-07-15 11:28:24.633281] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:50.213 [2024-07-15 11:28:24.638375] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:50.470 passed 00:12:50.470 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 11:28:24.738443] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:50.470 [2024-07-15 11:28:24.739927] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:50.470 [2024-07-15 11:28:24.739976] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:50.470 [2024-07-15 11:28:24.741484] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:50.470 passed 00:12:50.470 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 11:28:24.840632] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:50.728 [2024-07-15 11:28:24.936269] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:12:50.728 [2024-07-15 11:28:24.944267] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:50.728 [2024-07-15 11:28:24.952272] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:50.728 [2024-07-15 11:28:24.960265] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:50.728 [2024-07-15 11:28:24.989371] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:50.728 passed 00:12:50.728 Test: admin_create_io_sq_verify_pc ...[2024-07-15 11:28:25.085389] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:50.728 [2024-07-15 11:28:25.105279] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:50.728 [2024-07-15 11:28:25.122785] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:50.729 passed 00:12:50.985 Test: admin_create_io_qp_max_qps ...[2024-07-15 11:28:25.219843] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:51.917 [2024-07-15 11:28:26.314267] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:52.494 [2024-07-15 11:28:26.684126] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:52.494 passed 00:12:52.494 Test: admin_create_io_sq_shared_cq ...[2024-07-15 11:28:26.783878] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:52.494 [2024-07-15 11:28:26.917264] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:52.494 [2024-07-15 11:28:26.954356] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:52.756 passed 00:12:52.756 00:12:52.756 Run Summary: Type Total Ran Passed Failed Inactive 00:12:52.756 suites 1 1 n/a 0 0 00:12:52.756 tests 18 18 18 0 0 00:12:52.756 asserts 360 360 360 0 n/a 00:12:52.756 00:12:52.756 Elapsed time = 1.709 seconds 00:12:52.756 11:28:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2711448 00:12:52.756 11:28:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 2711448 ']' 00:12:52.756 11:28:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 2711448 00:12:52.756 11:28:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:12:52.756 11:28:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:52.756 11:28:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2711448 00:12:52.756 11:28:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:52.756 11:28:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:52.756 11:28:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2711448' 00:12:52.756 killing process with pid 2711448 00:12:52.756 11:28:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 2711448 00:12:52.756 11:28:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 2711448 00:12:53.015 11:28:27 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:53.015 00:12:53.015 real 0m6.715s 00:12:53.015 user 0m19.119s 00:12:53.015 sys 0m0.535s 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:53.015 ************************************ 00:12:53.015 END TEST nvmf_vfio_user_nvme_compliance 00:12:53.015 ************************************ 00:12:53.015 11:28:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:53.015 11:28:27 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:53.015 11:28:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:53.015 11:28:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:53.015 11:28:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:53.015 ************************************ 00:12:53.015 START TEST nvmf_vfio_user_fuzz 00:12:53.015 ************************************ 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:53.015 * Looking for test storage... 00:12:53.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
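For reference, the environment that sourcing test/nvmf/common.sh establishes in the records around here amounts to the defaults below. This is a sketch of only the assignments visible in this log, not the full script; the NVME_HOSTID derivation is an assumption (the log merely shows it equals the UUID portion of the generated hostnqn).

    # Defaults from test/nvmf/common.sh as visible in this log (sketch).
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100
    NVMF_IP_LEAST_ADDR=8
    NVMF_TCP_IP_ADDRESS=127.0.0.1
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumption: hostid is the uuid part of the hostnqn
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_CONNECT='nvme connect'
    NET_TYPE=phy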
00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.015 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:12:53.016 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.016 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:12:53.016 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:53.016 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:53.016 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:53.016 11:28:27 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.016 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.016 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:53.016 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:53.016 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:53.274 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:53.274 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:53.274 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:53.274 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:53.274 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:53.274 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:53.274 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:53.274 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2712806 00:12:53.274 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2712806' 00:12:53.274 Process pid: 2712806 00:12:53.274 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:53.274 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2712806 00:12:53.274 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:53.274 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2712806 ']' 00:12:53.274 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.274 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:53.274 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
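At this point the fuzz target nvmf_tgt (pid 2712806, started with -i 0 -e 0xFFFF -m 0x1) is coming up on /var/tmp/spdk.sock. The records that follow provision it with a VFIOUSER transport, a 64 MiB / 512 B malloc namespace under nqn.2021-09.io.spdk:cnode0 and a vfio-user listener, then run nvme_fuzz against it for 30 seconds. A stand-alone sketch of the same sequence, calling scripts/rpc.py directly in place of the suite's rpc_cmd wrapper ($SPDK stands for the workspace spdk checkout; flags are copied from the invocations below, not interpreted):

    # Sketch: provision the vfio-user fuzz target, then fuzz it for 30 s.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0
    # Same fuzzer invocation as in the test: fixed seed, 30 s runtime.
    $SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a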
00:12:53.274 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:53.274 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:53.533 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:53.533 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:12:53.533 11:28:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:54.468 11:28:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:54.468 11:28:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.468 11:28:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:54.468 11:28:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.468 11:28:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:54.468 11:28:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:54.468 11:28:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.468 11:28:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:54.468 malloc0 00:12:54.468 11:28:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.468 11:28:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:54.468 11:28:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.468 11:28:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:54.468 11:28:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.468 11:28:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:54.468 11:28:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.468 11:28:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:54.468 11:28:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.468 11:28:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:54.468 11:28:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.468 11:28:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:54.468 11:28:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.468 11:28:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:12:54.468 11:28:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:26.577 Fuzzing completed. 
Shutting down the fuzz application 00:13:26.577 00:13:26.577 Dumping successful admin opcodes: 00:13:26.577 8, 9, 10, 24, 00:13:26.577 Dumping successful io opcodes: 00:13:26.577 0, 00:13:26.577 NS: 0x200003a1ef00 I/O qp, Total commands completed: 591104, total successful commands: 2282, random_seed: 1283531328 00:13:26.577 NS: 0x200003a1ef00 admin qp, Total commands completed: 145208, total successful commands: 1179, random_seed: 2447184320 00:13:26.577 11:28:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:26.577 11:28:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.577 11:28:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:26.577 11:28:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.577 11:28:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2712806 00:13:26.577 11:28:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2712806 ']' 00:13:26.577 11:28:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 2712806 00:13:26.577 11:28:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:13:26.577 11:28:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:26.577 11:28:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2712806 00:13:26.577 11:28:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:26.577 11:28:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:26.577 11:28:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2712806' 00:13:26.577 killing process with pid 2712806 00:13:26.577 11:28:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 2712806 00:13:26.577 11:28:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 2712806 00:13:26.577 11:28:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:26.577 11:28:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:26.577 00:13:26.577 real 0m32.353s 00:13:26.577 user 0m36.674s 00:13:26.577 sys 0m24.185s 00:13:26.577 11:28:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:26.577 11:28:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:26.577 ************************************ 00:13:26.577 END TEST nvmf_vfio_user_fuzz 00:13:26.577 ************************************ 00:13:26.577 11:28:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:26.577 11:28:59 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:26.577 11:28:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:26.577 11:28:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:26.577 11:28:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:26.577 ************************************ 
00:13:26.577 START TEST nvmf_host_management 00:13:26.577 ************************************ 00:13:26.577 11:28:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:26.577 * Looking for test storage... 00:13:26.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.577 11:28:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.577 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:26.577 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.577 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.577 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.577 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.577 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.577 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.577 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.577 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.577 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.577 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.577 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:26.577 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:26.577 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.577 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.577 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.577 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.577 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.577 11:28:59 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.577 11:28:59 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.577 11:28:59 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.577 11:28:59 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.578 
11:28:59 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.578 11:28:59 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.578 11:28:59 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:26.578 11:28:59 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.578 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:26.578 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:26.578 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:26.578 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.578 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.578 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.578 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:26.578 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:26.578 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:26.578 11:28:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:26.578 11:28:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:26.578 11:28:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:26.578 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:26.578 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.578 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:26.578 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:26.578 11:28:59 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:26.578 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.578 11:28:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.578 11:28:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.578 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:26.578 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:26.578 11:28:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:26.578 11:28:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:31.892 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:31.892 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:31.892 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:31.893 Found net devices under 0000:af:00.0: cvl_0_0 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:31.893 Found net devices under 0000:af:00.1: cvl_0_1 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:31.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:31.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:13:31.893 00:13:31.893 --- 10.0.0.2 ping statistics --- 00:13:31.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.893 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:31.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:31.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:13:31.893 00:13:31.893 --- 10.0.0.1 ping statistics --- 00:13:31.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.893 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2721705 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2721705 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2721705 ']' 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:31.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:31.893 11:29:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:31.893 [2024-07-15 11:29:05.682232] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:13:31.893 [2024-07-15 11:29:05.682294] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.893 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.893 [2024-07-15 11:29:05.768379] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:31.893 [2024-07-15 11:29:05.876272] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.893 [2024-07-15 11:29:05.876318] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.893 [2024-07-15 11:29:05.876333] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.893 [2024-07-15 11:29:05.876344] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.893 [2024-07-15 11:29:05.876353] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:31.893 [2024-07-15 11:29:05.876480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.893 [2024-07-15 11:29:05.876512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:31.893 [2024-07-15 11:29:05.876642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:31.893 [2024-07-15 11:29:05.876643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:32.468 [2024-07-15 11:29:06.675165] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:32.468 11:29:06 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:32.468 Malloc0 00:13:32.468 [2024-07-15 11:29:06.745236] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2722007 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2722007 /var/tmp/bdevperf.sock 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2722007 ']' 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:32.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
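The rpcs.txt batch assembled just above is consumed by a single rpc_cmd call and its contents are not echoed into the log. The TCP transport itself was already created directly (rpc_cmd nvmf_create_transport -t tcp -o -u 8192, see the TCP Transport Init notice above); judging from the surrounding records (Malloc0, MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512, a listener on 10.0.0.2:4420, and the nqn.2016-06.io.spdk:cnode0 / host0 pair used later), the batched part is roughly equivalent to the per-call sequence below. This is a hedged sketch, not the literal batch; the serial number and the host whitelisting step are assumptions.

    # Sketch: per-call equivalent of the host_management rpcs.txt batch (assumed details noted).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Assumption: host0 is whitelisted up front, since nvmf_subsystem_remove_host is called later.
    $SPDK/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0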
00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:32.468 { 00:13:32.468 "params": { 00:13:32.468 "name": "Nvme$subsystem", 00:13:32.468 "trtype": "$TEST_TRANSPORT", 00:13:32.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:32.468 "adrfam": "ipv4", 00:13:32.468 "trsvcid": "$NVMF_PORT", 00:13:32.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:32.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:32.468 "hdgst": ${hdgst:-false}, 00:13:32.468 "ddgst": ${ddgst:-false} 00:13:32.468 }, 00:13:32.468 "method": "bdev_nvme_attach_controller" 00:13:32.468 } 00:13:32.468 EOF 00:13:32.468 )") 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:32.468 11:29:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:32.468 "params": { 00:13:32.468 "name": "Nvme0", 00:13:32.468 "trtype": "tcp", 00:13:32.468 "traddr": "10.0.0.2", 00:13:32.468 "adrfam": "ipv4", 00:13:32.468 "trsvcid": "4420", 00:13:32.468 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:32.468 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:32.468 "hdgst": false, 00:13:32.468 "ddgst": false 00:13:32.468 }, 00:13:32.468 "method": "bdev_nvme_attach_controller" 00:13:32.468 }' 00:13:32.468 [2024-07-15 11:29:06.841060] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:13:32.468 [2024-07-15 11:29:06.841119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2722007 ] 00:13:32.468 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.468 [2024-07-15 11:29:06.922653] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.726 [2024-07-15 11:29:07.008391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.983 Running I/O for 10 seconds... 
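The /dev/fd/63 path handed to bdevperf above comes from bash process substitution: gen_nvmf_target_json 0 is expanded inline on the same command line, and its output is exactly the bdev_nvme_attach_controller fragment printed in the record above. Wrapped in the usual SPDK --json envelope (the envelope structure is assumed here; the parameter values are taken verbatim from the log), the configuration bdevperf reads is roughly:

    # Sketch: the JSON config bdevperf receives on /dev/fd/63 (printed for readability).
    cat <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    JSON

The records that follow poll bdev_get_iostat on Nvme0n1 through the bdevperf RPC socket until num_read_ops reaches at least 100; in this run it is 54 on the first poll and 387 on the second, after which the test proceeds to remove host0 from the subsystem.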
00:13:32.983 11:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.983 11:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:32.983 11:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:32.983 11:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.983 11:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:32.983 11:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.983 11:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:32.983 11:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:32.983 11:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:32.983 11:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:32.983 11:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:32.983 11:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:32.983 11:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:32.983 11:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:32.984 11:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:32.984 11:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:32.984 11:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.984 11:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:32.984 11:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.984 11:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=54 00:13:32.984 11:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 54 -ge 100 ']' 00:13:32.984 11:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:13:33.263 11:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:13:33.263 11:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:33.263 11:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:33.263 11:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.263 11:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:33.263 11:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:33.263 11:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.263 11:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=387 00:13:33.263 11:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:13:33.263 11:29:07 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:13:33.263 11:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:33.263 11:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:33.263 11:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:33.263 11:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.263 11:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:33.263 [2024-07-15 11:29:07.717285] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ed4e0 is same with the state(5) to be set 00:13:33.263 [2024-07-15 11:29:07.717375] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ed4e0 is same with the state(5) to be set 00:13:33.263 [2024-07-15 11:29:07.717396] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ed4e0 is same with the state(5) to be set 00:13:33.263 [2024-07-15 11:29:07.717416] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ed4e0 is same with the state(5) to be set 00:13:33.263 [2024-07-15 11:29:07.717434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ed4e0 is same with the state(5) to be set 00:13:33.263 [2024-07-15 11:29:07.717453] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ed4e0 is same with the state(5) to be set 00:13:33.263 [2024-07-15 11:29:07.717471] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ed4e0 is same with the state(5) to be set 00:13:33.263 [2024-07-15 11:29:07.717490] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ed4e0 is same with the state(5) to be set 00:13:33.264 [2024-07-15 11:29:07.717508] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ed4e0 is same with the state(5) to be set 00:13:33.264 [2024-07-15 11:29:07.717526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ed4e0 is same with the state(5) to be set 00:13:33.264 [2024-07-15 11:29:07.717545] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ed4e0 is same with the state(5) to be set 00:13:33.264 [2024-07-15 11:29:07.717564] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ed4e0 is same with the state(5) to be set 00:13:33.264 [2024-07-15 11:29:07.717592] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ed4e0 is same with the state(5) to be set 00:13:33.264 [2024-07-15 11:29:07.717611] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ed4e0 is same with the state(5) to be set 00:13:33.264 [2024-07-15 11:29:07.717630] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ed4e0 is same with the state(5) to be set 00:13:33.264 [2024-07-15 11:29:07.717649] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ed4e0 is same with the state(5) to be set 00:13:33.264 [2024-07-15 11:29:07.717670] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ed4e0 is same with the state(5) to be set 00:13:33.264 [2024-07-15 11:29:07.717688] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ed4e0 is same with the state(5) 
to be set 00:13:33.264 [2024-07-15 11:29:07.717708] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ed4e0 is same with the state(5) to be set 00:13:33.264 [2024-07-15 11:29:07.717727] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ed4e0 is same with the state(5) to be set 00:13:33.264 [2024-07-15 11:29:07.717746] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ed4e0 is same with the state(5) to be set 00:13:33.264 [2024-07-15 11:29:07.717765] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ed4e0 is same with the state(5) to be set 00:13:33.264 [2024-07-15 11:29:07.717785] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ed4e0 is same with the state(5) to be set 00:13:33.264 [2024-07-15 11:29:07.717803] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ed4e0 is same with the state(5) to be set 00:13:33.264 [2024-07-15 11:29:07.718859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.264 [2024-07-15 11:29:07.718904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.264 [2024-07-15 11:29:07.718918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.264 [2024-07-15 11:29:07.718928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.264 [2024-07-15 11:29:07.718940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.264 [2024-07-15 11:29:07.718951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.264 [2024-07-15 11:29:07.718962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.264 [2024-07-15 11:29:07.718972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.264 [2024-07-15 11:29:07.718982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7fe90 is same with the state(5) to be set 00:13:33.264 [2024-07-15 11:29:07.719054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.264 [2024-07-15 11:29:07.719068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.264 [2024-07-15 11:29:07.719085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.264 [2024-07-15 11:29:07.719095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.264 [2024-07-15 11:29:07.719111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.264 [2024-07-15 11:29:07.719129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.264 [2024-07-15 11:29:07.719141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.264 [2024-07-15 11:29:07.719151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.264 [2024-07-15 11:29:07.719164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.264 [2024-07-15 11:29:07.719175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.264 [2024-07-15 11:29:07.719187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.264 [2024-07-15 11:29:07.719197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.264 [2024-07-15 11:29:07.719211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.264 [2024-07-15 11:29:07.719221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.264 [2024-07-15 11:29:07.719233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.264 [2024-07-15 11:29:07.719243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.264 [2024-07-15 11:29:07.719266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.264 [2024-07-15 11:29:07.719277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.264 [2024-07-15 11:29:07.719289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.264 [2024-07-15 11:29:07.719302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.264 [2024-07-15 11:29:07.719314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.264 [2024-07-15 11:29:07.719324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.264 [2024-07-15 11:29:07.719337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.264 [2024-07-15 11:29:07.719348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.264 [2024-07-15 11:29:07.719360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.264 [2024-07-15 11:29:07.719370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.264 [2024-07-15 11:29:07.719382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.264 [2024-07-15 11:29:07.719394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.264 [2024-07-15 11:29:07.719405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.264 [2024-07-15 11:29:07.719415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.264 [2024-07-15 11:29:07.719432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.264 [2024-07-15 11:29:07.719442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.264 [2024-07-15 11:29:07.719455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.264 [2024-07-15 11:29:07.719466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.264 [2024-07-15 11:29:07.719478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.264 [2024-07-15 11:29:07.719488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.264 [2024-07-15 11:29:07.719500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.264 [2024-07-15 11:29:07.719511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.719523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.719533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.719545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.719556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.719568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.719579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.719591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.719601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.719614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.719624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.719639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.719649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.719661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.719673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.719684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.719694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.719707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.719719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.719731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.719741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.719754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.719764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.719776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.719786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.719798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.719809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.719821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.719831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.719844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.719855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.719867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.719877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.719889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.719900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.719912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.719922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.719934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.719945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.719957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.719967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.719979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.719990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.720009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.720019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.720031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.720042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.720054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.720064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.720077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.720088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.720100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.720110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.720122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.720133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.720145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.720154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.720166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.720177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.720189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.720199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.720211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.720222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.265 [2024-07-15 11:29:07.720235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.265 [2024-07-15 11:29:07.720245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.266 [2024-07-15 11:29:07.720265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.266 [2024-07-15 11:29:07.720275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.266 [2024-07-15 11:29:07.720287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.266 [2024-07-15 11:29:07.720300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:13:33.266 [2024-07-15 11:29:07.720314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.266 [2024-07-15 11:29:07.720324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.266 [2024-07-15 11:29:07.720336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.266 [2024-07-15 11:29:07.720355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.266 [2024-07-15 11:29:07.720367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.266 [2024-07-15 11:29:07.720377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.266 [2024-07-15 11:29:07.720389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.266 [2024-07-15 11:29:07.720399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.266 [2024-07-15 11:29:07.720411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.266 [2024-07-15 11:29:07.720422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.266 [2024-07-15 11:29:07.720434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.266 [2024-07-15 11:29:07.720444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.266 [2024-07-15 11:29:07.720457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.266 [2024-07-15 11:29:07.720467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.266 [2024-07-15 11:29:07.720479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.266 [2024-07-15 11:29:07.720490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.266 [2024-07-15 11:29:07.720502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.266 [2024-07-15 11:29:07.720512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.266 [2024-07-15 11:29:07.720524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.266 [2024-07-15 11:29:07.720535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.266 
[2024-07-15 11:29:07.720547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.266 [2024-07-15 11:29:07.720557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.266 [2024-07-15 11:29:07.720625] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22b1eb0 was disconnected and freed. reset controller. 00:13:33.266 11:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.266 [2024-07-15 11:29:07.721982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:33.266 11:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:33.266 11:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.266 11:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:33.266 task offset: 57344 on job bdev=Nvme0n1 fails 00:13:33.266 00:13:33.266 Latency(us) 00:13:33.266 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.266 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:33.266 Job: Nvme0n1 ended in about 0.43 seconds with error 00:13:33.266 Verification LBA range: start 0x0 length 0x400 00:13:33.266 Nvme0n1 : 0.43 1039.96 65.00 148.57 0.00 52037.21 2532.07 54096.99 00:13:33.266 =================================================================================================================== 00:13:33.266 Total : 1039.96 65.00 148.57 0.00 52037.21 2532.07 54096.99 00:13:33.266 [2024-07-15 11:29:07.724281] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:33.266 [2024-07-15 11:29:07.724301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7fe90 (9): Bad file descriptor 00:13:33.526 11:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.526 11:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:33.526 [2024-07-15 11:29:07.857551] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
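Editor's note: the abort storm and controller reset above are the intended effect of the host-management check. host_management.sh pulls nqn.2016-06.io.spdk:host0 out of the subsystem's allowed-host list while bdevperf still has writes queued, then grants it back. A minimal sketch of that toggle against an already-running target, using the same rpc.py path this run uses:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Revoke access: in-flight commands are aborted ("ABORTED - SQ DELETION") and the qpair is dropped.
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Restore access: the initiator's reset/reconnect can then complete, as logged above.
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0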
00:13:34.463 11:29:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2722007 00:13:34.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2722007) - No such process 00:13:34.463 11:29:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:34.463 11:29:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:34.463 11:29:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:34.463 11:29:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:34.463 11:29:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:34.463 11:29:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:34.463 11:29:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:34.463 11:29:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:34.463 { 00:13:34.463 "params": { 00:13:34.463 "name": "Nvme$subsystem", 00:13:34.463 "trtype": "$TEST_TRANSPORT", 00:13:34.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:34.463 "adrfam": "ipv4", 00:13:34.463 "trsvcid": "$NVMF_PORT", 00:13:34.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:34.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:34.463 "hdgst": ${hdgst:-false}, 00:13:34.463 "ddgst": ${ddgst:-false} 00:13:34.463 }, 00:13:34.463 "method": "bdev_nvme_attach_controller" 00:13:34.463 } 00:13:34.463 EOF 00:13:34.463 )") 00:13:34.463 11:29:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:34.463 11:29:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:34.463 11:29:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:34.463 11:29:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:34.463 "params": { 00:13:34.463 "name": "Nvme0", 00:13:34.463 "trtype": "tcp", 00:13:34.463 "traddr": "10.0.0.2", 00:13:34.463 "adrfam": "ipv4", 00:13:34.463 "trsvcid": "4420", 00:13:34.463 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:34.463 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:34.463 "hdgst": false, 00:13:34.463 "ddgst": false 00:13:34.463 }, 00:13:34.464 "method": "bdev_nvme_attach_controller" 00:13:34.464 }' 00:13:34.464 [2024-07-15 11:29:08.786437] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:13:34.464 [2024-07-15 11:29:08.786500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2722293 ] 00:13:34.464 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.464 [2024-07-15 11:29:08.866102] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.722 [2024-07-15 11:29:08.948627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.722 Running I/O for 1 seconds... 
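Editor's note: gen_nvmf_target_json above splices the printed bdev_nvme_attach_controller fragment into a JSON config and hands it to bdevperf through /dev/fd/62. A sketch of the same run with the config in a regular file; the outer "subsystems"/"bdev" wrapper below is an assumption (only the inner fragment is printed in this log), so treat it as illustrative rather than the helper's exact output:

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
# Same workload as the test above: queue depth 64, 64 KiB blocks, verify for 1 second.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1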
00:13:36.100 00:13:36.100 Latency(us) 00:13:36.100 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.100 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:36.100 Verification LBA range: start 0x0 length 0x400 00:13:36.100 Nvme0n1 : 1.03 1123.48 70.22 0.00 0.00 55910.21 6672.76 52667.11 00:13:36.100 =================================================================================================================== 00:13:36.100 Total : 1123.48 70.22 0.00 0.00 55910.21 6672.76 52667.11 00:13:36.100 11:29:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:36.100 11:29:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:36.100 11:29:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:36.100 11:29:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:36.100 11:29:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:36.100 11:29:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:36.100 11:29:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:13:36.100 11:29:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:36.100 11:29:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:13:36.100 11:29:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:36.100 11:29:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:36.100 rmmod nvme_tcp 00:13:36.100 rmmod nvme_fabrics 00:13:36.100 rmmod nvme_keyring 00:13:36.100 11:29:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:36.100 11:29:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:13:36.100 11:29:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:13:36.100 11:29:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2721705 ']' 00:13:36.100 11:29:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2721705 00:13:36.100 11:29:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2721705 ']' 00:13:36.100 11:29:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2721705 00:13:36.100 11:29:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:13:36.100 11:29:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:36.100 11:29:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2721705 00:13:36.100 11:29:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:36.100 11:29:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:36.100 11:29:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2721705' 00:13:36.100 killing process with pid 2721705 00:13:36.100 11:29:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2721705 00:13:36.100 11:29:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2721705 00:13:36.358 [2024-07-15 11:29:10.728423] app.c: 
710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:36.358 11:29:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:36.359 11:29:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:36.359 11:29:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:36.359 11:29:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:36.359 11:29:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:36.359 11:29:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.359 11:29:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.359 11:29:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.906 11:29:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:38.906 11:29:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:38.906 00:13:38.906 real 0m13.046s 00:13:38.906 user 0m23.569s 00:13:38.906 sys 0m5.559s 00:13:38.906 11:29:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:38.906 11:29:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:38.906 ************************************ 00:13:38.906 END TEST nvmf_host_management 00:13:38.906 ************************************ 00:13:38.906 11:29:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:38.906 11:29:12 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:38.906 11:29:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:38.907 11:29:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:38.907 11:29:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:38.907 ************************************ 00:13:38.907 START TEST nvmf_lvol 00:13:38.907 ************************************ 00:13:38.907 11:29:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:38.907 * Looking for test storage... 
00:13:38.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:38.907 11:29:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:38.907 11:29:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:38.907 11:29:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.907 11:29:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.907 11:29:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.907 11:29:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.907 11:29:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.907 11:29:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.907 11:29:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.907 11:29:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.907 11:29:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.907 11:29:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.907 11:29:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:38.907 11:29:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:38.907 11:29:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.907 11:29:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.907 11:29:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:38.907 11:29:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:38.907 11:29:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:38.908 11:29:13 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.908 11:29:13 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.908 11:29:13 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.908 11:29:13 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.908 11:29:13 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.908 11:29:13 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.908 11:29:13 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:38.908 11:29:13 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.908 11:29:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:38.908 11:29:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:38.908 11:29:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:38.908 11:29:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:38.908 11:29:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.908 11:29:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.908 11:29:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:38.908 11:29:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:38.908 11:29:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:38.908 11:29:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:38.908 11:29:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:38.908 11:29:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:38.908 11:29:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:38.908 11:29:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:38.909 11:29:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:38.909 11:29:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:38.909 11:29:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:38.909 11:29:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:38.909 11:29:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:38.909 11:29:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:38.909 11:29:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.909 11:29:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:38.909 11:29:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.909 11:29:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:38.909 11:29:13 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:38.909 11:29:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:38.909 11:29:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:45.481 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:45.481 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:45.482 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:45.482 Found net devices under 0000:af:00.0: cvl_0_0 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:45.482 Found net devices under 0000:af:00.1: cvl_0_1 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:45.482 
11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:45.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:45.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:13:45.482 00:13:45.482 --- 10.0.0.2 ping statistics --- 00:13:45.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.482 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:13:45.482 11:29:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:45.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:45.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:13:45.482 00:13:45.482 --- 10.0.0.1 ping statistics --- 00:13:45.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.482 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:13:45.482 11:29:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:45.482 11:29:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:13:45.482 11:29:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:45.482 11:29:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:45.482 11:29:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:45.482 11:29:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:45.482 11:29:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:45.482 11:29:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:45.482 11:29:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:45.482 11:29:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:45.482 11:29:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:45.482 11:29:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:45.482 11:29:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:45.482 11:29:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2726275 00:13:45.482 11:29:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2726275 00:13:45.482 11:29:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:45.482 11:29:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 2726275 ']' 00:13:45.482 11:29:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.482 11:29:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:45.482 11:29:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.482 11:29:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:45.482 11:29:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:45.482 [2024-07-15 11:29:19.112341] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:13:45.482 [2024-07-15 11:29:19.112398] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.482 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.482 [2024-07-15 11:29:19.198522] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:45.482 [2024-07-15 11:29:19.289576] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.482 [2024-07-15 11:29:19.289618] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
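Editor's note: nvmftestinit above pairs the two cvl_* ports found earlier: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2 while cvl_0_1 keeps 10.0.0.1 in the root namespace, which is what the two pings verify. nvmfappstart then launches nvmf_tgt inside that namespace and waits for its RPC socket. A rough sketch of that startup; the readiness loop is an approximation of the test's waitforlisten helper, not its actual code:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Run the target in the namespace that owns the 10.0.0.2 side of the link.
ip netns exec cvl_0_0_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
# Poll the default RPC socket (/var/tmp/spdk.sock) until the app answers.
until $spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done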
00:13:45.482 [2024-07-15 11:29:19.289628] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.482 [2024-07-15 11:29:19.289636] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.482 [2024-07-15 11:29:19.289644] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:45.482 [2024-07-15 11:29:19.289695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.482 [2024-07-15 11:29:19.289808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.482 [2024-07-15 11:29:19.289809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.740 11:29:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:45.740 11:29:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:13:45.740 11:29:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:45.740 11:29:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:45.740 11:29:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:45.740 11:29:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.740 11:29:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:45.998 [2024-07-15 11:29:20.245857] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.998 11:29:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:46.255 11:29:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:46.255 11:29:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:46.512 11:29:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:46.512 11:29:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:46.770 11:29:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:47.028 11:29:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c5e246d7-3ce8-41f3-a40a-41bec2762596 00:13:47.028 11:29:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c5e246d7-3ce8-41f3-a40a-41bec2762596 lvol 20 00:13:47.286 11:29:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d152f8b4-a034-492c-ad7c-bfaa36b792f4 00:13:47.286 11:29:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:47.545 11:29:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d152f8b4-a034-492c-ad7c-bfaa36b792f4 00:13:47.803 11:29:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
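Editor's note: taken together, the RPCs above stand up the whole export: a TCP transport, two 64 MB malloc bdevs with 512-byte blocks striped into raid0, an lvstore named lvs on the raid, an lvol of size 20 (LVOL_BDEV_INIT_SIZE) inside it, and subsystem nqn.2016-06.io.spdk:cnode0 exposing that lvol as a namespace on a 10.0.0.2:4420 listener (confirmed just below). A condensed sketch of the same chain, with the generated UUIDs captured into shell variables as the script does:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                                   # -> Malloc0
$rpc bdev_malloc_create 64 512                                   # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # lvol UUID
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420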
00:13:48.061 [2024-07-15 11:29:22.320176] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.061 11:29:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:48.319 11:29:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2726840 00:13:48.319 11:29:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:48.319 11:29:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:48.319 EAL: No free 2048 kB hugepages reported on node 1 00:13:49.254 11:29:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d152f8b4-a034-492c-ad7c-bfaa36b792f4 MY_SNAPSHOT 00:13:49.513 11:29:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5d730a00-b0d7-4c6f-b21f-660fc98b2a84 00:13:49.513 11:29:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d152f8b4-a034-492c-ad7c-bfaa36b792f4 30 00:13:49.771 11:29:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5d730a00-b0d7-4c6f-b21f-660fc98b2a84 MY_CLONE 00:13:50.030 11:29:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a513faa9-96cd-4caa-9ed1-49c3a9a2a093 00:13:50.030 11:29:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a513faa9-96cd-4caa-9ed1-49c3a9a2a093 00:13:50.966 11:29:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2726840 00:13:59.086 Initializing NVMe Controllers 00:13:59.086 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:59.086 Controller IO queue size 128, less than required. 00:13:59.086 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:59.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:59.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:59.086 Initialization complete. Launching workers. 
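While the 10-second spdk_nvme_perf workload launched above runs against the exported lvol, the script exercises the snapshot/clone path; stripped of the UUIDs, the sequence is (a sketch, with placeholders for the IDs reported in the log; the perf results follow right after):

    rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT       # read-only snapshot of the live lvol
    rpc.py bdev_lvol_resize   <lvol-uuid> 30                # grow the lvol to 30 MiB under I/O
    rpc.py bdev_lvol_clone    <snapshot-uuid> MY_CLONE      # writable clone of the snapshot
    rpc.py bdev_lvol_inflate  <clone-uuid>                  # fully allocate the clone, detaching it from its parent
    wait $perf_pid                                          # collect the perf run's latency summary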
00:13:59.086 ======================================================== 00:13:59.086 Latency(us) 00:13:59.086 Device Information : IOPS MiB/s Average min max 00:13:59.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7094.87 27.71 18062.27 2293.92 80216.06 00:13:59.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8711.29 34.03 14705.32 4473.55 81573.05 00:13:59.086 ======================================================== 00:13:59.086 Total : 15806.16 61.74 16212.14 2293.92 81573.05 00:13:59.086 00:13:59.086 11:29:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:59.086 11:29:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d152f8b4-a034-492c-ad7c-bfaa36b792f4 00:13:59.086 11:29:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c5e246d7-3ce8-41f3-a40a-41bec2762596 00:13:59.345 11:29:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:59.345 11:29:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:59.345 11:29:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:59.345 11:29:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:59.345 11:29:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:13:59.345 11:29:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:59.345 11:29:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:13:59.345 11:29:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:59.345 11:29:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:59.345 rmmod nvme_tcp 00:13:59.345 rmmod nvme_fabrics 00:13:59.345 rmmod nvme_keyring 00:13:59.345 11:29:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:59.345 11:29:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:13:59.345 11:29:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:13:59.345 11:29:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2726275 ']' 00:13:59.345 11:29:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2726275 00:13:59.345 11:29:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 2726275 ']' 00:13:59.345 11:29:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 2726275 00:13:59.345 11:29:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:13:59.345 11:29:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:59.345 11:29:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2726275 00:13:59.345 11:29:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:59.345 11:29:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:59.345 11:29:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2726275' 00:13:59.345 killing process with pid 2726275 00:13:59.345 11:29:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 2726275 00:13:59.345 11:29:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 2726275 00:13:59.604 11:29:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:59.604 
11:29:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:59.604 11:29:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:59.604 11:29:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:59.604 11:29:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:59.604 11:29:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.604 11:29:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.604 11:29:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.140 11:29:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:02.140 00:14:02.140 real 0m23.169s 00:14:02.140 user 1m7.719s 00:14:02.140 sys 0m7.286s 00:14:02.140 11:29:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:02.140 11:29:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:02.140 ************************************ 00:14:02.140 END TEST nvmf_lvol 00:14:02.140 ************************************ 00:14:02.140 11:29:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:02.140 11:29:36 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:02.140 11:29:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:02.140 11:29:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:02.140 11:29:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:02.140 ************************************ 00:14:02.140 START TEST nvmf_lvs_grow 00:14:02.140 ************************************ 00:14:02.140 11:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:02.140 * Looking for test storage... 
00:14:02.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:02.140 11:29:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.140 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:02.140 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.140 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.140 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.140 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.140 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.140 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.140 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.140 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.140 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.140 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.140 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:02.140 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:02.140 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.140 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.140 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:02.140 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.140 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:02.140 11:29:36 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.140 11:29:36 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:02.141 11:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:07.417 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:07.417 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:07.417 Found net devices under 0000:af:00.0: cvl_0_0 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:07.417 Found net devices under 0000:af:00.1: cvl_0_1 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:07.417 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:07.736 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:07.736 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:07.736 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:07.736 11:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:07.736 11:29:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:07.736 11:29:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:07.736 11:29:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:07.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:07.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:14:07.736 00:14:07.736 --- 10.0.0.2 ping statistics --- 00:14:07.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.736 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:14:07.736 11:29:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:07.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:07.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:14:07.736 00:14:07.736 --- 10.0.0.1 ping statistics --- 00:14:07.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.736 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:14:07.736 11:29:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.736 11:29:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:07.736 11:29:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:07.736 11:29:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.736 11:29:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:07.736 11:29:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:07.736 11:29:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.736 11:29:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:07.736 11:29:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:07.736 11:29:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:07.736 11:29:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:07.736 11:29:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:07.736 11:29:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:07.736 11:29:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2732559 00:14:07.736 11:29:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2732559 00:14:07.736 11:29:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:07.736 11:29:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 2732559 ']' 00:14:07.736 11:29:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.736 11:29:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:07.736 11:29:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.736 11:29:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:07.736 11:29:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:07.736 [2024-07-15 11:29:42.169413] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:14:07.736 [2024-07-15 11:29:42.169476] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.995 EAL: No free 2048 kB hugepages reported on node 1 00:14:07.995 [2024-07-15 11:29:42.258038] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.995 [2024-07-15 11:29:42.346712] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.995 [2024-07-15 11:29:42.346754] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
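Before this second nvmf_tgt (the lvs_grow target, -m 0x1) finishes printing its startup notices below, the nvmf_tcp_init block traced above is what provided the 10.0.0.1/10.0.0.2 pair on the two e810 ports: one port is moved into a namespace and serves as the target, the other stays in the root namespace as the initiator. Reduced to its commands (a sketch; interface and namespace names are the ones the log reports):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator-side port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                       # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target -> initiator
    modprobe nvme-tcp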
00:14:07.995 [2024-07-15 11:29:42.346765] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:07.995 [2024-07-15 11:29:42.346773] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:07.995 [2024-07-15 11:29:42.346784] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:07.995 [2024-07-15 11:29:42.346806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.995 11:29:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:07.995 11:29:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:14:07.995 11:29:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:07.995 11:29:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:07.995 11:29:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:08.253 11:29:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.253 11:29:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:08.253 [2024-07-15 11:29:42.710304] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.512 11:29:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:08.512 11:29:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:08.512 11:29:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:08.512 11:29:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:08.512 ************************************ 00:14:08.512 START TEST lvs_grow_clean 00:14:08.512 ************************************ 00:14:08.512 11:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:14:08.512 11:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:08.512 11:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:08.512 11:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:08.512 11:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:08.512 11:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:08.512 11:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:08.512 11:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:08.512 11:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:08.512 11:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:08.770 11:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:14:08.770 11:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:09.029 11:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5f77fcac-4b81-419d-ba07-8fbcfcbf7eee 00:14:09.029 11:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f77fcac-4b81-419d-ba07-8fbcfcbf7eee 00:14:09.029 11:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:09.288 11:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:09.288 11:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:09.288 11:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5f77fcac-4b81-419d-ba07-8fbcfcbf7eee lvol 150 00:14:09.855 11:29:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c02027c0-989e-4d7c-b59a-212a8aecd9a3 00:14:09.855 11:29:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:09.855 11:29:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:09.856 [2024-07-15 11:29:44.270631] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:09.856 [2024-07-15 11:29:44.270695] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:09.856 true 00:14:09.856 11:29:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:09.856 11:29:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f77fcac-4b81-419d-ba07-8fbcfcbf7eee 00:14:10.424 11:29:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:10.424 11:29:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:10.992 11:29:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c02027c0-989e-4d7c-b59a-212a8aecd9a3 00:14:11.635 11:29:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:11.635 [2024-07-15 11:29:46.003781] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.635 11:29:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:12.203 11:29:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2733391 00:14:12.203 11:29:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:12.203 11:29:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:12.203 11:29:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2733391 /var/tmp/bdevperf.sock 00:14:12.203 11:29:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 2733391 ']' 00:14:12.203 11:29:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:12.203 11:29:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:12.203 11:29:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:12.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:12.203 11:29:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:12.203 11:29:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:12.203 [2024-07-15 11:29:46.597178] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:14:12.203 [2024-07-15 11:29:46.597307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2733391 ] 00:14:12.203 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.462 [2024-07-15 11:29:46.712212] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.462 [2024-07-15 11:29:46.817218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.721 11:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:12.721 11:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:14:12.721 11:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:13.289 Nvme0n1 00:14:13.289 11:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:13.547 [ 00:14:13.547 { 00:14:13.547 "name": "Nvme0n1", 00:14:13.547 "aliases": [ 00:14:13.547 "c02027c0-989e-4d7c-b59a-212a8aecd9a3" 00:14:13.547 ], 00:14:13.547 "product_name": "NVMe disk", 00:14:13.547 "block_size": 4096, 00:14:13.547 "num_blocks": 38912, 00:14:13.547 "uuid": "c02027c0-989e-4d7c-b59a-212a8aecd9a3", 00:14:13.547 "assigned_rate_limits": { 00:14:13.547 "rw_ios_per_sec": 0, 00:14:13.547 "rw_mbytes_per_sec": 0, 00:14:13.547 "r_mbytes_per_sec": 0, 00:14:13.547 "w_mbytes_per_sec": 0 00:14:13.547 }, 00:14:13.547 "claimed": false, 00:14:13.547 "zoned": false, 00:14:13.547 "supported_io_types": { 00:14:13.547 "read": true, 00:14:13.547 "write": true, 00:14:13.547 "unmap": true, 00:14:13.547 "flush": true, 00:14:13.547 "reset": true, 00:14:13.547 "nvme_admin": true, 00:14:13.547 "nvme_io": true, 00:14:13.547 "nvme_io_md": false, 00:14:13.547 "write_zeroes": true, 00:14:13.547 "zcopy": false, 00:14:13.547 "get_zone_info": false, 00:14:13.547 "zone_management": false, 00:14:13.547 "zone_append": false, 00:14:13.547 "compare": true, 00:14:13.547 "compare_and_write": true, 00:14:13.547 "abort": true, 00:14:13.547 "seek_hole": false, 00:14:13.547 "seek_data": false, 00:14:13.547 "copy": true, 00:14:13.547 "nvme_iov_md": false 00:14:13.547 }, 00:14:13.547 "memory_domains": [ 00:14:13.547 { 00:14:13.547 "dma_device_id": "system", 00:14:13.547 "dma_device_type": 1 00:14:13.547 } 00:14:13.547 ], 00:14:13.547 "driver_specific": { 00:14:13.547 "nvme": [ 00:14:13.547 { 00:14:13.547 "trid": { 00:14:13.547 "trtype": "TCP", 00:14:13.547 "adrfam": "IPv4", 00:14:13.547 "traddr": "10.0.0.2", 00:14:13.547 "trsvcid": "4420", 00:14:13.547 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:13.547 }, 00:14:13.547 "ctrlr_data": { 00:14:13.547 "cntlid": 1, 00:14:13.547 "vendor_id": "0x8086", 00:14:13.547 "model_number": "SPDK bdev Controller", 00:14:13.547 "serial_number": "SPDK0", 00:14:13.547 "firmware_revision": "24.09", 00:14:13.547 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:13.547 "oacs": { 00:14:13.547 "security": 0, 00:14:13.547 "format": 0, 00:14:13.547 "firmware": 0, 00:14:13.547 "ns_manage": 0 00:14:13.547 }, 00:14:13.547 "multi_ctrlr": true, 00:14:13.547 "ana_reporting": false 00:14:13.547 }, 
00:14:13.547 "vs": { 00:14:13.547 "nvme_version": "1.3" 00:14:13.547 }, 00:14:13.547 "ns_data": { 00:14:13.547 "id": 1, 00:14:13.547 "can_share": true 00:14:13.547 } 00:14:13.547 } 00:14:13.547 ], 00:14:13.547 "mp_policy": "active_passive" 00:14:13.547 } 00:14:13.547 } 00:14:13.547 ] 00:14:13.547 11:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2733552 00:14:13.547 11:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:13.547 11:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:13.547 Running I/O for 10 seconds... 00:14:14.483 Latency(us) 00:14:14.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.483 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:14.483 Nvme0n1 : 1.00 14570.00 56.91 0.00 0.00 0.00 0.00 0.00 00:14:14.483 =================================================================================================================== 00:14:14.483 Total : 14570.00 56.91 0.00 0.00 0.00 0.00 0.00 00:14:14.483 00:14:15.420 11:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5f77fcac-4b81-419d-ba07-8fbcfcbf7eee 00:14:15.678 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:15.678 Nvme0n1 : 2.00 14673.00 57.32 0.00 0.00 0.00 0.00 0.00 00:14:15.678 =================================================================================================================== 00:14:15.678 Total : 14673.00 57.32 0.00 0.00 0.00 0.00 0.00 00:14:15.678 00:14:15.678 true 00:14:15.678 11:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f77fcac-4b81-419d-ba07-8fbcfcbf7eee 00:14:15.678 11:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:15.936 11:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:15.936 11:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:15.936 11:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2733552 00:14:16.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:16.503 Nvme0n1 : 3.00 14678.00 57.34 0.00 0.00 0.00 0.00 0.00 00:14:16.503 =================================================================================================================== 00:14:16.503 Total : 14678.00 57.34 0.00 0.00 0.00 0.00 0.00 00:14:16.503 00:14:17.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:17.878 Nvme0n1 : 4.00 14718.50 57.49 0.00 0.00 0.00 0.00 0.00 00:14:17.878 =================================================================================================================== 00:14:17.878 Total : 14718.50 57.49 0.00 0.00 0.00 0.00 0.00 00:14:17.878 00:14:18.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:18.813 Nvme0n1 : 5.00 14747.60 57.61 0.00 0.00 0.00 0.00 0.00 00:14:18.813 =================================================================================================================== 00:14:18.813 
Total : 14747.60 57.61 0.00 0.00 0.00 0.00 0.00 00:14:18.813 00:14:19.748 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:19.748 Nvme0n1 : 6.00 14769.67 57.69 0.00 0.00 0.00 0.00 0.00 00:14:19.748 =================================================================================================================== 00:14:19.748 Total : 14769.67 57.69 0.00 0.00 0.00 0.00 0.00 00:14:19.748 00:14:20.684 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:20.684 Nvme0n1 : 7.00 14790.00 57.77 0.00 0.00 0.00 0.00 0.00 00:14:20.684 =================================================================================================================== 00:14:20.684 Total : 14790.00 57.77 0.00 0.00 0.00 0.00 0.00 00:14:20.684 00:14:21.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.621 Nvme0n1 : 8.00 14805.25 57.83 0.00 0.00 0.00 0.00 0.00 00:14:21.621 =================================================================================================================== 00:14:21.621 Total : 14805.25 57.83 0.00 0.00 0.00 0.00 0.00 00:14:21.621 00:14:22.557 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:22.557 Nvme0n1 : 9.00 14820.67 57.89 0.00 0.00 0.00 0.00 0.00 00:14:22.557 =================================================================================================================== 00:14:22.557 Total : 14820.67 57.89 0.00 0.00 0.00 0.00 0.00 00:14:22.557 00:14:23.493 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:23.493 Nvme0n1 : 10.00 14833.00 57.94 0.00 0.00 0.00 0.00 0.00 00:14:23.493 =================================================================================================================== 00:14:23.493 Total : 14833.00 57.94 0.00 0.00 0.00 0.00 0.00 00:14:23.493 00:14:23.493 00:14:23.493 Latency(us) 00:14:23.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.493 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:23.493 Nvme0n1 : 10.01 14832.29 57.94 0.00 0.00 8620.56 5987.61 15728.64 00:14:23.493 =================================================================================================================== 00:14:23.493 Total : 14832.29 57.94 0.00 0.00 8620.56 5987.61 15728.64 00:14:23.493 0 00:14:23.753 11:29:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2733391 00:14:23.753 11:29:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 2733391 ']' 00:14:23.753 11:29:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 2733391 00:14:23.753 11:29:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:14:23.753 11:29:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:23.753 11:29:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2733391 00:14:23.753 11:29:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:23.753 11:29:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:23.753 11:29:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2733391' 00:14:23.753 killing process with pid 2733391 00:14:23.753 11:29:58 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 2733391 00:14:23.753 Received shutdown signal, test time was about 10.000000 seconds 00:14:23.753 00:14:23.753 Latency(us) 00:14:23.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.753 =================================================================================================================== 00:14:23.753 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:23.753 11:29:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 2733391 00:14:24.013 11:29:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:24.272 11:29:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:24.530 11:29:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f77fcac-4b81-419d-ba07-8fbcfcbf7eee 00:14:24.530 11:29:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:24.530 11:29:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:24.530 11:29:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:24.530 11:29:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:25.096 [2024-07-15 11:29:59.454985] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:25.096 11:29:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f77fcac-4b81-419d-ba07-8fbcfcbf7eee 00:14:25.096 11:29:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:14:25.096 11:29:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f77fcac-4b81-419d-ba07-8fbcfcbf7eee 00:14:25.096 11:29:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.096 11:29:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:25.096 11:29:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.096 11:29:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:25.096 11:29:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.096 11:29:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:25.096 11:29:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.096 11:29:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:25.096 11:29:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f77fcac-4b81-419d-ba07-8fbcfcbf7eee 00:14:25.662 request: 00:14:25.662 { 00:14:25.662 "uuid": "5f77fcac-4b81-419d-ba07-8fbcfcbf7eee", 00:14:25.662 "method": "bdev_lvol_get_lvstores", 00:14:25.662 "req_id": 1 00:14:25.662 } 00:14:25.662 Got JSON-RPC error response 00:14:25.662 response: 00:14:25.662 { 00:14:25.662 "code": -19, 00:14:25.662 "message": "No such device" 00:14:25.662 } 00:14:25.663 11:30:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:25.663 11:30:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:25.663 11:30:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:25.663 11:30:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:25.663 11:30:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:25.921 aio_bdev 00:14:25.921 11:30:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c02027c0-989e-4d7c-b59a-212a8aecd9a3 00:14:25.921 11:30:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=c02027c0-989e-4d7c-b59a-212a8aecd9a3 00:14:25.921 11:30:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:25.921 11:30:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:14:25.921 11:30:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:25.921 11:30:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:25.921 11:30:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:26.180 11:30:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c02027c0-989e-4d7c-b59a-212a8aecd9a3 -t 2000 00:14:26.439 [ 00:14:26.439 { 00:14:26.439 "name": "c02027c0-989e-4d7c-b59a-212a8aecd9a3", 00:14:26.439 "aliases": [ 00:14:26.439 "lvs/lvol" 00:14:26.439 ], 00:14:26.439 "product_name": "Logical Volume", 00:14:26.439 "block_size": 4096, 00:14:26.439 "num_blocks": 38912, 00:14:26.439 "uuid": "c02027c0-989e-4d7c-b59a-212a8aecd9a3", 00:14:26.439 "assigned_rate_limits": { 00:14:26.439 "rw_ios_per_sec": 0, 00:14:26.439 "rw_mbytes_per_sec": 0, 00:14:26.439 "r_mbytes_per_sec": 0, 00:14:26.439 "w_mbytes_per_sec": 0 00:14:26.439 }, 00:14:26.439 "claimed": false, 00:14:26.439 "zoned": false, 00:14:26.439 "supported_io_types": { 00:14:26.439 "read": true, 00:14:26.439 "write": true, 00:14:26.439 "unmap": true, 00:14:26.439 "flush": false, 00:14:26.439 "reset": true, 00:14:26.439 "nvme_admin": false, 00:14:26.439 "nvme_io": false, 00:14:26.439 
"nvme_io_md": false, 00:14:26.439 "write_zeroes": true, 00:14:26.439 "zcopy": false, 00:14:26.439 "get_zone_info": false, 00:14:26.439 "zone_management": false, 00:14:26.439 "zone_append": false, 00:14:26.439 "compare": false, 00:14:26.439 "compare_and_write": false, 00:14:26.439 "abort": false, 00:14:26.439 "seek_hole": true, 00:14:26.439 "seek_data": true, 00:14:26.439 "copy": false, 00:14:26.439 "nvme_iov_md": false 00:14:26.439 }, 00:14:26.439 "driver_specific": { 00:14:26.439 "lvol": { 00:14:26.439 "lvol_store_uuid": "5f77fcac-4b81-419d-ba07-8fbcfcbf7eee", 00:14:26.439 "base_bdev": "aio_bdev", 00:14:26.439 "thin_provision": false, 00:14:26.439 "num_allocated_clusters": 38, 00:14:26.439 "snapshot": false, 00:14:26.439 "clone": false, 00:14:26.439 "esnap_clone": false 00:14:26.439 } 00:14:26.439 } 00:14:26.439 } 00:14:26.439 ] 00:14:26.439 11:30:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:14:26.439 11:30:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f77fcac-4b81-419d-ba07-8fbcfcbf7eee 00:14:26.439 11:30:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:26.698 11:30:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:26.698 11:30:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:26.698 11:30:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f77fcac-4b81-419d-ba07-8fbcfcbf7eee 00:14:26.955 11:30:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:26.955 11:30:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c02027c0-989e-4d7c-b59a-212a8aecd9a3 00:14:27.213 11:30:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5f77fcac-4b81-419d-ba07-8fbcfcbf7eee 00:14:27.778 11:30:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:28.346 11:30:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:28.346 00:14:28.346 real 0m19.778s 00:14:28.346 user 0m19.644s 00:14:28.346 sys 0m1.872s 00:14:28.346 11:30:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:28.346 11:30:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:28.346 ************************************ 00:14:28.346 END TEST lvs_grow_clean 00:14:28.346 ************************************ 00:14:28.346 11:30:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:28.346 11:30:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:28.346 11:30:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:28.346 11:30:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:14:28.346 11:30:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:28.346 ************************************ 00:14:28.346 START TEST lvs_grow_dirty 00:14:28.346 ************************************ 00:14:28.346 11:30:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:14:28.346 11:30:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:28.346 11:30:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:28.346 11:30:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:28.346 11:30:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:28.346 11:30:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:28.346 11:30:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:28.346 11:30:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:28.346 11:30:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:28.346 11:30:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:28.605 11:30:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:28.605 11:30:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:28.866 11:30:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=97a4290a-260a-4af2-940d-9a834222c37a 00:14:28.866 11:30:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97a4290a-260a-4af2-940d-9a834222c37a 00:14:28.866 11:30:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:29.124 11:30:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:29.124 11:30:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:29.124 11:30:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 97a4290a-260a-4af2-940d-9a834222c37a lvol 150 00:14:29.383 11:30:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=3a48fc06-7af4-4f3b-9a28-a52344827079 00:14:29.383 11:30:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:29.383 11:30:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:29.641 
[2024-07-15 11:30:03.851853] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:29.641 [2024-07-15 11:30:03.851916] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:29.641 true 00:14:29.641 11:30:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:29.641 11:30:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97a4290a-260a-4af2-940d-9a834222c37a 00:14:29.900 11:30:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:29.900 11:30:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:29.900 11:30:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3a48fc06-7af4-4f3b-9a28-a52344827079 00:14:30.468 11:30:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:30.727 [2024-07-15 11:30:05.079512] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.728 11:30:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:31.295 11:30:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2736978 00:14:31.295 11:30:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:31.295 11:30:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2736978 /var/tmp/bdevperf.sock 00:14:31.295 11:30:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:31.295 11:30:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2736978 ']' 00:14:31.295 11:30:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:31.295 11:30:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.295 11:30:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:31.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:31.295 11:30:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.295 11:30:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:31.295 [2024-07-15 11:30:05.642154] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:14:31.295 [2024-07-15 11:30:05.642214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2736978 ] 00:14:31.295 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.295 [2024-07-15 11:30:05.722767] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.554 [2024-07-15 11:30:05.829412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.813 11:30:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:31.813 11:30:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:31.813 11:30:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:32.381 Nvme0n1 00:14:32.640 11:30:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:32.640 [ 00:14:32.640 { 00:14:32.640 "name": "Nvme0n1", 00:14:32.640 "aliases": [ 00:14:32.640 "3a48fc06-7af4-4f3b-9a28-a52344827079" 00:14:32.640 ], 00:14:32.641 "product_name": "NVMe disk", 00:14:32.641 "block_size": 4096, 00:14:32.641 "num_blocks": 38912, 00:14:32.641 "uuid": "3a48fc06-7af4-4f3b-9a28-a52344827079", 00:14:32.641 "assigned_rate_limits": { 00:14:32.641 "rw_ios_per_sec": 0, 00:14:32.641 "rw_mbytes_per_sec": 0, 00:14:32.641 "r_mbytes_per_sec": 0, 00:14:32.641 "w_mbytes_per_sec": 0 00:14:32.641 }, 00:14:32.641 "claimed": false, 00:14:32.641 "zoned": false, 00:14:32.641 "supported_io_types": { 00:14:32.641 "read": true, 00:14:32.641 "write": true, 00:14:32.641 "unmap": true, 00:14:32.641 "flush": true, 00:14:32.641 "reset": true, 00:14:32.641 "nvme_admin": true, 00:14:32.641 "nvme_io": true, 00:14:32.641 "nvme_io_md": false, 00:14:32.641 "write_zeroes": true, 00:14:32.641 "zcopy": false, 00:14:32.641 "get_zone_info": false, 00:14:32.641 "zone_management": false, 00:14:32.641 "zone_append": false, 00:14:32.641 "compare": true, 00:14:32.641 "compare_and_write": true, 00:14:32.641 "abort": true, 00:14:32.641 "seek_hole": false, 00:14:32.641 "seek_data": false, 00:14:32.641 "copy": true, 00:14:32.641 "nvme_iov_md": false 00:14:32.641 }, 00:14:32.641 "memory_domains": [ 00:14:32.641 { 00:14:32.641 "dma_device_id": "system", 00:14:32.641 "dma_device_type": 1 00:14:32.641 } 00:14:32.641 ], 00:14:32.641 "driver_specific": { 00:14:32.641 "nvme": [ 00:14:32.641 { 00:14:32.641 "trid": { 00:14:32.641 "trtype": "TCP", 00:14:32.641 "adrfam": "IPv4", 00:14:32.641 "traddr": "10.0.0.2", 00:14:32.641 "trsvcid": "4420", 00:14:32.641 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:32.641 }, 00:14:32.641 "ctrlr_data": { 00:14:32.641 "cntlid": 1, 00:14:32.641 "vendor_id": "0x8086", 00:14:32.641 "model_number": "SPDK bdev Controller", 00:14:32.641 "serial_number": "SPDK0", 
00:14:32.641 "firmware_revision": "24.09", 00:14:32.641 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:32.641 "oacs": { 00:14:32.641 "security": 0, 00:14:32.641 "format": 0, 00:14:32.641 "firmware": 0, 00:14:32.641 "ns_manage": 0 00:14:32.641 }, 00:14:32.641 "multi_ctrlr": true, 00:14:32.641 "ana_reporting": false 00:14:32.641 }, 00:14:32.641 "vs": { 00:14:32.641 "nvme_version": "1.3" 00:14:32.641 }, 00:14:32.641 "ns_data": { 00:14:32.641 "id": 1, 00:14:32.641 "can_share": true 00:14:32.641 } 00:14:32.641 } 00:14:32.641 ], 00:14:32.641 "mp_policy": "active_passive" 00:14:32.641 } 00:14:32.641 } 00:14:32.641 ] 00:14:32.900 11:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2737269 00:14:32.900 11:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:32.900 11:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:32.900 Running I/O for 10 seconds... 00:14:34.277 Latency(us) 00:14:34.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.277 Nvme0n1 : 1.00 15243.00 59.54 0.00 0.00 0.00 0.00 0.00 00:14:34.277 =================================================================================================================== 00:14:34.277 Total : 15243.00 59.54 0.00 0.00 0.00 0.00 0.00 00:14:34.277 00:14:34.845 11:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 97a4290a-260a-4af2-940d-9a834222c37a 00:14:35.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.102 Nvme0n1 : 2.00 15306.50 59.79 0.00 0.00 0.00 0.00 0.00 00:14:35.102 =================================================================================================================== 00:14:35.102 Total : 15306.50 59.79 0.00 0.00 0.00 0.00 0.00 00:14:35.102 00:14:35.360 true 00:14:35.360 11:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97a4290a-260a-4af2-940d-9a834222c37a 00:14:35.360 11:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:35.619 11:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:35.619 11:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:35.619 11:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2737269 00:14:35.877 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.877 Nvme0n1 : 3.00 15332.33 59.89 0.00 0.00 0.00 0.00 0.00 00:14:35.877 =================================================================================================================== 00:14:35.877 Total : 15332.33 59.89 0.00 0.00 0.00 0.00 0.00 00:14:35.878 00:14:37.252 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.252 Nvme0n1 : 4.00 15375.00 60.06 0.00 0.00 0.00 0.00 0.00 00:14:37.252 =================================================================================================================== 00:14:37.252 Total : 15375.00 60.06 0.00 
0.00 0.00 0.00 0.00 00:14:37.252 00:14:38.188 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.188 Nvme0n1 : 5.00 15398.80 60.15 0.00 0.00 0.00 0.00 0.00 00:14:38.188 =================================================================================================================== 00:14:38.188 Total : 15398.80 60.15 0.00 0.00 0.00 0.00 0.00 00:14:38.188 00:14:39.124 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.124 Nvme0n1 : 6.00 15416.17 60.22 0.00 0.00 0.00 0.00 0.00 00:14:39.124 =================================================================================================================== 00:14:39.124 Total : 15416.17 60.22 0.00 0.00 0.00 0.00 0.00 00:14:39.124 00:14:40.074 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.074 Nvme0n1 : 7.00 15436.43 60.30 0.00 0.00 0.00 0.00 0.00 00:14:40.074 =================================================================================================================== 00:14:40.074 Total : 15436.43 60.30 0.00 0.00 0.00 0.00 0.00 00:14:40.074 00:14:41.047 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.047 Nvme0n1 : 8.00 15452.38 60.36 0.00 0.00 0.00 0.00 0.00 00:14:41.048 =================================================================================================================== 00:14:41.048 Total : 15452.38 60.36 0.00 0.00 0.00 0.00 0.00 00:14:41.048 00:14:41.984 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.984 Nvme0n1 : 9.00 15464.11 60.41 0.00 0.00 0.00 0.00 0.00 00:14:41.984 =================================================================================================================== 00:14:41.984 Total : 15464.11 60.41 0.00 0.00 0.00 0.00 0.00 00:14:41.984 00:14:42.927 00:14:42.927 Latency(us) 00:14:42.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.927 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.927 Nvme0n1 : 10.00 15475.05 60.45 0.00 0.00 8265.03 2725.70 15966.95 00:14:42.927 =================================================================================================================== 00:14:42.927 Total : 15475.05 60.45 0.00 0.00 8265.03 2725.70 15966.95 00:14:42.927 0 00:14:42.927 11:30:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2736978 00:14:42.927 11:30:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 2736978 ']' 00:14:42.927 11:30:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 2736978 00:14:42.927 11:30:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:14:42.927 11:30:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:42.927 11:30:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2736978 00:14:43.186 11:30:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:43.186 11:30:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:43.186 11:30:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2736978' 00:14:43.186 killing process with pid 2736978 00:14:43.186 11:30:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@967 -- # kill 2736978 00:14:43.186 Received shutdown signal, test time was about 10.000000 seconds 00:14:43.186 00:14:43.186 Latency(us) 00:14:43.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.186 =================================================================================================================== 00:14:43.186 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:43.186 11:30:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 2736978 00:14:43.186 11:30:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:43.753 11:30:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:44.321 11:30:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97a4290a-260a-4af2-940d-9a834222c37a 00:14:44.321 11:30:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:44.580 11:30:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:44.580 11:30:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:44.580 11:30:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2732559 00:14:44.580 11:30:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2732559 00:14:44.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2732559 Killed "${NVMF_APP[@]}" "$@" 00:14:44.580 11:30:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:44.580 11:30:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:44.580 11:30:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:44.580 11:30:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:44.580 11:30:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:44.580 11:30:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2739829 00:14:44.580 11:30:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2739829 00:14:44.580 11:30:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:44.580 11:30:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2739829 ']' 00:14:44.580 11:30:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.580 11:30:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:44.580 11:30:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:44.580 11:30:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:44.580 11:30:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:44.580 [2024-07-15 11:30:19.026865] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:14:44.580 [2024-07-15 11:30:19.026927] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.839 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.839 [2024-07-15 11:30:19.117121] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.839 [2024-07-15 11:30:19.202775] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.839 [2024-07-15 11:30:19.202823] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:44.839 [2024-07-15 11:30:19.202833] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.839 [2024-07-15 11:30:19.202841] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:44.839 [2024-07-15 11:30:19.202849] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:44.839 [2024-07-15 11:30:19.202871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.776 11:30:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:45.776 11:30:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:45.776 11:30:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:45.776 11:30:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:45.776 11:30:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:45.776 11:30:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.776 11:30:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:45.776 [2024-07-15 11:30:20.147211] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:45.776 [2024-07-15 11:30:20.147328] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:45.776 [2024-07-15 11:30:20.147366] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:45.776 11:30:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:45.776 11:30:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 3a48fc06-7af4-4f3b-9a28-a52344827079 00:14:45.776 11:30:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=3a48fc06-7af4-4f3b-9a28-a52344827079 00:14:45.776 11:30:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:45.776 11:30:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:45.776 11:30:20 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:45.776 11:30:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:45.776 11:30:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:46.035 11:30:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3a48fc06-7af4-4f3b-9a28-a52344827079 -t 2000 00:14:46.602 [ 00:14:46.602 { 00:14:46.602 "name": "3a48fc06-7af4-4f3b-9a28-a52344827079", 00:14:46.602 "aliases": [ 00:14:46.602 "lvs/lvol" 00:14:46.602 ], 00:14:46.602 "product_name": "Logical Volume", 00:14:46.602 "block_size": 4096, 00:14:46.602 "num_blocks": 38912, 00:14:46.602 "uuid": "3a48fc06-7af4-4f3b-9a28-a52344827079", 00:14:46.602 "assigned_rate_limits": { 00:14:46.602 "rw_ios_per_sec": 0, 00:14:46.602 "rw_mbytes_per_sec": 0, 00:14:46.602 "r_mbytes_per_sec": 0, 00:14:46.602 "w_mbytes_per_sec": 0 00:14:46.602 }, 00:14:46.602 "claimed": false, 00:14:46.602 "zoned": false, 00:14:46.602 "supported_io_types": { 00:14:46.602 "read": true, 00:14:46.602 "write": true, 00:14:46.602 "unmap": true, 00:14:46.602 "flush": false, 00:14:46.602 "reset": true, 00:14:46.602 "nvme_admin": false, 00:14:46.602 "nvme_io": false, 00:14:46.602 "nvme_io_md": false, 00:14:46.602 "write_zeroes": true, 00:14:46.602 "zcopy": false, 00:14:46.602 "get_zone_info": false, 00:14:46.602 "zone_management": false, 00:14:46.602 "zone_append": false, 00:14:46.602 "compare": false, 00:14:46.602 "compare_and_write": false, 00:14:46.602 "abort": false, 00:14:46.602 "seek_hole": true, 00:14:46.602 "seek_data": true, 00:14:46.602 "copy": false, 00:14:46.602 "nvme_iov_md": false 00:14:46.602 }, 00:14:46.602 "driver_specific": { 00:14:46.602 "lvol": { 00:14:46.602 "lvol_store_uuid": "97a4290a-260a-4af2-940d-9a834222c37a", 00:14:46.602 "base_bdev": "aio_bdev", 00:14:46.602 "thin_provision": false, 00:14:46.602 "num_allocated_clusters": 38, 00:14:46.602 "snapshot": false, 00:14:46.602 "clone": false, 00:14:46.602 "esnap_clone": false 00:14:46.602 } 00:14:46.602 } 00:14:46.602 } 00:14:46.602 ] 00:14:46.602 11:30:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:46.602 11:30:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97a4290a-260a-4af2-940d-9a834222c37a 00:14:46.602 11:30:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:46.861 11:30:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:46.861 11:30:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97a4290a-260a-4af2-940d-9a834222c37a 00:14:46.861 11:30:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:47.120 11:30:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:47.120 11:30:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:47.688 
[2024-07-15 11:30:21.885512] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:47.688 11:30:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97a4290a-260a-4af2-940d-9a834222c37a 00:14:47.688 11:30:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:14:47.688 11:30:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97a4290a-260a-4af2-940d-9a834222c37a 00:14:47.688 11:30:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:47.688 11:30:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:47.688 11:30:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:47.688 11:30:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:47.688 11:30:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:47.688 11:30:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:47.688 11:30:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:47.689 11:30:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:47.689 11:30:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97a4290a-260a-4af2-940d-9a834222c37a 00:14:47.947 request: 00:14:47.947 { 00:14:47.947 "uuid": "97a4290a-260a-4af2-940d-9a834222c37a", 00:14:47.947 "method": "bdev_lvol_get_lvstores", 00:14:47.947 "req_id": 1 00:14:47.947 } 00:14:47.947 Got JSON-RPC error response 00:14:47.947 response: 00:14:47.947 { 00:14:47.947 "code": -19, 00:14:47.947 "message": "No such device" 00:14:47.947 } 00:14:47.947 11:30:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:14:47.947 11:30:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:47.947 11:30:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:47.947 11:30:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:47.947 11:30:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:48.205 aio_bdev 00:14:48.463 11:30:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3a48fc06-7af4-4f3b-9a28-a52344827079 00:14:48.463 11:30:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=3a48fc06-7af4-4f3b-9a28-a52344827079 00:14:48.463 
11:30:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:48.463 11:30:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:48.463 11:30:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:48.463 11:30:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:48.463 11:30:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:48.721 11:30:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3a48fc06-7af4-4f3b-9a28-a52344827079 -t 2000 00:14:48.979 [ 00:14:48.979 { 00:14:48.979 "name": "3a48fc06-7af4-4f3b-9a28-a52344827079", 00:14:48.979 "aliases": [ 00:14:48.979 "lvs/lvol" 00:14:48.979 ], 00:14:48.979 "product_name": "Logical Volume", 00:14:48.979 "block_size": 4096, 00:14:48.979 "num_blocks": 38912, 00:14:48.979 "uuid": "3a48fc06-7af4-4f3b-9a28-a52344827079", 00:14:48.979 "assigned_rate_limits": { 00:14:48.979 "rw_ios_per_sec": 0, 00:14:48.979 "rw_mbytes_per_sec": 0, 00:14:48.979 "r_mbytes_per_sec": 0, 00:14:48.979 "w_mbytes_per_sec": 0 00:14:48.979 }, 00:14:48.979 "claimed": false, 00:14:48.979 "zoned": false, 00:14:48.979 "supported_io_types": { 00:14:48.979 "read": true, 00:14:48.979 "write": true, 00:14:48.979 "unmap": true, 00:14:48.979 "flush": false, 00:14:48.979 "reset": true, 00:14:48.979 "nvme_admin": false, 00:14:48.979 "nvme_io": false, 00:14:48.979 "nvme_io_md": false, 00:14:48.979 "write_zeroes": true, 00:14:48.979 "zcopy": false, 00:14:48.979 "get_zone_info": false, 00:14:48.979 "zone_management": false, 00:14:48.979 "zone_append": false, 00:14:48.979 "compare": false, 00:14:48.979 "compare_and_write": false, 00:14:48.979 "abort": false, 00:14:48.979 "seek_hole": true, 00:14:48.979 "seek_data": true, 00:14:48.979 "copy": false, 00:14:48.979 "nvme_iov_md": false 00:14:48.979 }, 00:14:48.979 "driver_specific": { 00:14:48.979 "lvol": { 00:14:48.979 "lvol_store_uuid": "97a4290a-260a-4af2-940d-9a834222c37a", 00:14:48.979 "base_bdev": "aio_bdev", 00:14:48.979 "thin_provision": false, 00:14:48.979 "num_allocated_clusters": 38, 00:14:48.979 "snapshot": false, 00:14:48.979 "clone": false, 00:14:48.979 "esnap_clone": false 00:14:48.979 } 00:14:48.979 } 00:14:48.979 } 00:14:48.979 ] 00:14:48.979 11:30:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:48.979 11:30:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97a4290a-260a-4af2-940d-9a834222c37a 00:14:48.979 11:30:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:49.236 11:30:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:49.236 11:30:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97a4290a-260a-4af2-940d-9a834222c37a 00:14:49.236 11:30:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:49.494 11:30:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # 
(( data_clusters == 99 )) 00:14:49.494 11:30:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3a48fc06-7af4-4f3b-9a28-a52344827079 00:14:50.060 11:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 97a4290a-260a-4af2-940d-9a834222c37a 00:14:50.627 11:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:51.194 11:30:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:51.194 00:14:51.194 real 0m22.858s 00:14:51.194 user 0m56.681s 00:14:51.194 sys 0m4.227s 00:14:51.194 11:30:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:51.194 11:30:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:51.194 ************************************ 00:14:51.194 END TEST lvs_grow_dirty 00:14:51.194 ************************************ 00:14:51.194 11:30:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:51.194 11:30:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:51.194 11:30:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:14:51.194 11:30:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:14:51.194 11:30:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:51.194 11:30:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:51.194 11:30:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:51.194 11:30:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:51.194 11:30:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:51.194 11:30:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:51.195 nvmf_trace.0 00:14:51.195 11:30:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:14:51.195 11:30:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:51.195 11:30:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:51.195 11:30:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:51.195 11:30:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:51.195 11:30:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:51.195 11:30:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:51.195 11:30:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:51.195 rmmod nvme_tcp 00:14:51.195 rmmod nvme_fabrics 00:14:51.195 rmmod nvme_keyring 00:14:51.195 11:30:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:51.195 11:30:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:14:51.195 11:30:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:51.195 11:30:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 
2739829 ']' 00:14:51.195 11:30:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2739829 00:14:51.195 11:30:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 2739829 ']' 00:14:51.195 11:30:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 2739829 00:14:51.195 11:30:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:14:51.195 11:30:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:51.195 11:30:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2739829 00:14:51.454 11:30:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:51.454 11:30:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:51.454 11:30:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2739829' 00:14:51.454 killing process with pid 2739829 00:14:51.454 11:30:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 2739829 00:14:51.454 11:30:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 2739829 00:14:51.454 11:30:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:51.454 11:30:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:51.454 11:30:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:51.454 11:30:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:51.454 11:30:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:51.454 11:30:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.454 11:30:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.454 11:30:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.989 11:30:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:53.989 00:14:53.989 real 0m51.788s 00:14:53.989 user 1m25.174s 00:14:53.989 sys 0m10.911s 00:14:53.989 11:30:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:53.989 11:30:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:53.989 ************************************ 00:14:53.989 END TEST nvmf_lvs_grow 00:14:53.989 ************************************ 00:14:53.989 11:30:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:53.989 11:30:27 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:53.989 11:30:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:53.989 11:30:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:53.989 11:30:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:53.989 ************************************ 00:14:53.989 START TEST nvmf_bdev_io_wait 00:14:53.989 ************************************ 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:53.989 * Looking for test storage... 
00:14:53.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.989 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:53.990 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.990 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:53.990 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:53.990 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:53.990 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:53.990 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.990 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.990 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:53.990 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:53.990 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:53.990 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:53.990 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:53.990 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:53.990 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:53.990 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.990 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:53.990 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:53.990 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:53.990 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.990 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.990 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.990 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:53.990 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:53.990 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:14:53.990 11:30:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:59.264 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:59.264 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:14:59.264 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:59.264 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:59.264 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:59.264 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:59.264 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:59.264 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:14:59.264 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:59.264 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:14:59.264 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:14:59.264 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:14:59.264 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:14:59.264 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:14:59.264 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:14:59.264 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:59.264 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:59.264 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:59.264 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:59.264 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:59.264 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:59.264 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:59.264 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:59.264 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:59.264 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:59.265 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:59.265 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:59.265 Found net devices under 0000:af:00.0: cvl_0_0 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:59.265 Found net devices under 0000:af:00.1: cvl_0_1 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:59.265 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:59.523 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:59.523 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:59.523 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:59.523 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:59.523 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:59.523 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:59.523 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:59.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:59.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:14:59.523 00:14:59.523 --- 10.0.0.2 ping statistics --- 00:14:59.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.523 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:14:59.523 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:59.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:59.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:14:59.523 00:14:59.523 --- 10.0.0.1 ping statistics --- 00:14:59.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.523 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:14:59.524 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:59.524 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:14:59.524 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:59.524 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:59.524 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:59.524 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:59.524 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:59.524 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:59.524 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:59.524 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:59.524 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:59.524 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:59.524 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:59.524 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2744655 00:14:59.524 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2744655 00:14:59.524 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:59.524 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 2744655 ']' 00:14:59.524 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.524 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:59.524 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.524 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:59.524 11:30:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:59.782 [2024-07-15 11:30:34.001686] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:14:59.782 [2024-07-15 11:30:34.001740] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.782 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.782 [2024-07-15 11:30:34.086638] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:59.782 [2024-07-15 11:30:34.182235] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.782 [2024-07-15 11:30:34.182282] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.782 [2024-07-15 11:30:34.182293] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.782 [2024-07-15 11:30:34.182302] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.782 [2024-07-15 11:30:34.182313] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.782 [2024-07-15 11:30:34.182362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.782 [2024-07-15 11:30:34.182474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.782 [2024-07-15 11:30:34.182610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:59.782 [2024-07-15 11:30:34.182610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.718 11:30:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:00.718 11:30:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:15:00.718 11:30:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:00.718 11:30:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:00.718 11:30:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:00.718 11:30:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.718 11:30:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:00.718 11:30:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.718 11:30:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:00.718 [2024-07-15 11:30:35.069199] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
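Taken together, the rpc_cmd calls traced in this test (the TCP transport just created above, plus the malloc bdev, subsystem, namespace and listener that follow below) amount to the short rpc.py sequence sketched here. This is a minimal sketch rather than the test script itself: it assumes the nvmf_tgt started above is still listening on the default /var/tmp/spdk.sock, and every NQN, size and address is the one visible in the trace.

# Sketch of the traced target-side setup via scripts/rpc.py (default RPC socket assumed)
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_set_options -p 5 -c 1      # small bdev_io pool/cache, possible only because nvmf_tgt ran with --wait-for-rpc
$RPC framework_start_init            # finish the subsystem init deferred by --wait-for-rpc
$RPC nvmf_create_transport -t tcp -o -u 8192                                        # TCP transport with the test's options
$RPC bdev_malloc_create 64 512 -b Malloc0                                           # 64 MiB RAM bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001      # allow any host, fixed serial
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                       # expose Malloc0 as a namespace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, each bdevperf instance below connects to 10.0.0.2:4420 as host nqn.2016-06.io.spdk:host1.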
00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:00.718 Malloc0 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:00.718 [2024-07-15 11:30:35.148743] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2744879 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2744882 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:00.718 { 00:15:00.718 "params": { 00:15:00.718 "name": "Nvme$subsystem", 00:15:00.718 "trtype": "$TEST_TRANSPORT", 00:15:00.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:00.718 "adrfam": "ipv4", 00:15:00.718 "trsvcid": "$NVMF_PORT", 00:15:00.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:00.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:00.718 "hdgst": ${hdgst:-false}, 00:15:00.718 "ddgst": ${ddgst:-false} 00:15:00.718 }, 00:15:00.718 "method": "bdev_nvme_attach_controller" 00:15:00.718 } 00:15:00.718 EOF 00:15:00.718 )") 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2744886 00:15:00.718 11:30:35 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:00.718 { 00:15:00.718 "params": { 00:15:00.718 "name": "Nvme$subsystem", 00:15:00.718 "trtype": "$TEST_TRANSPORT", 00:15:00.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:00.718 "adrfam": "ipv4", 00:15:00.718 "trsvcid": "$NVMF_PORT", 00:15:00.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:00.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:00.718 "hdgst": ${hdgst:-false}, 00:15:00.718 "ddgst": ${ddgst:-false} 00:15:00.718 }, 00:15:00.718 "method": "bdev_nvme_attach_controller" 00:15:00.718 } 00:15:00.718 EOF 00:15:00.718 )") 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2744890 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:00.718 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:00.718 { 00:15:00.718 "params": { 00:15:00.718 "name": "Nvme$subsystem", 00:15:00.718 "trtype": "$TEST_TRANSPORT", 00:15:00.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:00.719 "adrfam": "ipv4", 00:15:00.719 "trsvcid": "$NVMF_PORT", 00:15:00.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:00.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:00.719 "hdgst": ${hdgst:-false}, 00:15:00.719 "ddgst": ${ddgst:-false} 00:15:00.719 }, 00:15:00.719 "method": "bdev_nvme_attach_controller" 00:15:00.719 } 00:15:00.719 EOF 00:15:00.719 )") 00:15:00.719 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:00.719 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:00.719 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:00.719 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:00.719 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:00.719 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:00.719 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:00.719 { 00:15:00.719 "params": { 00:15:00.719 "name": "Nvme$subsystem", 00:15:00.719 "trtype": "$TEST_TRANSPORT", 00:15:00.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:00.719 "adrfam": "ipv4", 00:15:00.719 "trsvcid": "$NVMF_PORT", 00:15:00.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:00.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:00.719 "hdgst": ${hdgst:-false}, 00:15:00.719 "ddgst": ${ddgst:-false} 00:15:00.719 }, 00:15:00.719 "method": "bdev_nvme_attach_controller" 00:15:00.719 } 00:15:00.719 EOF 00:15:00.719 )") 00:15:00.719 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:00.719 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2744879 00:15:00.719 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:00.719 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:00.719 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:00.719 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:00.719 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:00.719 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:00.719 "params": { 00:15:00.719 "name": "Nvme1", 00:15:00.719 "trtype": "tcp", 00:15:00.719 "traddr": "10.0.0.2", 00:15:00.719 "adrfam": "ipv4", 00:15:00.719 "trsvcid": "4420", 00:15:00.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:00.719 "hdgst": false, 00:15:00.719 "ddgst": false 00:15:00.719 }, 00:15:00.719 "method": "bdev_nvme_attach_controller" 00:15:00.719 }' 00:15:00.719 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:00.719 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:00.719 "params": { 00:15:00.719 "name": "Nvme1", 00:15:00.719 "trtype": "tcp", 00:15:00.719 "traddr": "10.0.0.2", 00:15:00.719 "adrfam": "ipv4", 00:15:00.719 "trsvcid": "4420", 00:15:00.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:00.719 "hdgst": false, 00:15:00.719 "ddgst": false 00:15:00.719 }, 00:15:00.719 "method": "bdev_nvme_attach_controller" 00:15:00.719 }' 00:15:00.719 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
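Each of the config+=(...) heredocs above emits a single bdev_nvme_attach_controller entry, and gen_nvmf_target_json joins the entries and pipes them through jq before handing the result to bdevperf on /dev/fd/63 via process substitution. Roughly, each bdevperf instance therefore starts from a JSON config like the sketch below; the params object is the one printed in this trace, while the surrounding subsystems/bdev/config wrapper and the /tmp/nvme1.json path are assumptions for illustration only.

# Sketch: write the generated config to a file and point one bdevperf instance at it
cat > /tmp/nvme1.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# e.g. the write-workload instance from the trace, reading the config from the file instead of /dev/fd/63:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256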
00:15:00.719 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:00.719 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:00.719 "params": { 00:15:00.719 "name": "Nvme1", 00:15:00.719 "trtype": "tcp", 00:15:00.719 "traddr": "10.0.0.2", 00:15:00.719 "adrfam": "ipv4", 00:15:00.719 "trsvcid": "4420", 00:15:00.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:00.719 "hdgst": false, 00:15:00.719 "ddgst": false 00:15:00.719 }, 00:15:00.719 "method": "bdev_nvme_attach_controller" 00:15:00.719 }' 00:15:00.719 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:00.719 11:30:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:00.719 "params": { 00:15:00.719 "name": "Nvme1", 00:15:00.719 "trtype": "tcp", 00:15:00.719 "traddr": "10.0.0.2", 00:15:00.719 "adrfam": "ipv4", 00:15:00.719 "trsvcid": "4420", 00:15:00.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:00.719 "hdgst": false, 00:15:00.719 "ddgst": false 00:15:00.719 }, 00:15:00.719 "method": "bdev_nvme_attach_controller" 00:15:00.719 }' 00:15:00.978 [2024-07-15 11:30:35.200269] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:15:00.978 [2024-07-15 11:30:35.200328] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:00.978 [2024-07-15 11:30:35.202121] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:15:00.978 [2024-07-15 11:30:35.202173] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:00.978 [2024-07-15 11:30:35.204618] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:15:00.978 [2024-07-15 11:30:35.204624] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:15:00.978 [2024-07-15 11:30:35.204686] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:00.978 [2024-07-15 11:30:35.204687] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:00.978 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.978 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.978 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.978 [2024-07-15 11:30:35.399375] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.978 [2024-07-15 11:30:35.430422] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.236 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.236 [2024-07-15 11:30:35.513373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:01.236 [2024-07-15 11:30:35.522495] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.236 [2024-07-15 11:30:35.540957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:01.236 [2024-07-15 11:30:35.612450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:15:01.236 [2024-07-15 11:30:35.622592] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.494 [2024-07-15 11:30:35.724625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:01.494 Running I/O for 1 seconds... 00:15:01.494 Running I/O for 1 seconds... 00:15:01.494 Running I/O for 1 seconds... 00:15:01.751 Running I/O for 1 seconds... 
00:15:02.686 00:15:02.686 Latency(us) 00:15:02.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.686 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:02.686 Nvme1n1 : 1.04 2855.84 11.16 0.00 0.00 44088.67 13524.25 75783.45 00:15:02.686 =================================================================================================================== 00:15:02.686 Total : 2855.84 11.16 0.00 0.00 44088.67 13524.25 75783.45 00:15:02.686 00:15:02.686 Latency(us) 00:15:02.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.686 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:02.686 Nvme1n1 : 1.01 8680.07 33.91 0.00 0.00 14679.85 8102.63 27763.43 00:15:02.686 =================================================================================================================== 00:15:02.686 Total : 8680.07 33.91 0.00 0.00 14679.85 8102.63 27763.43 00:15:02.686 00:15:02.686 Latency(us) 00:15:02.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.686 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:02.686 Nvme1n1 : 1.01 2771.87 10.83 0.00 0.00 45892.16 11200.70 91988.71 00:15:02.686 =================================================================================================================== 00:15:02.686 Total : 2771.87 10.83 0.00 0.00 45892.16 11200.70 91988.71 00:15:02.686 00:15:02.686 Latency(us) 00:15:02.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.686 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:02.686 Nvme1n1 : 1.00 163070.93 637.00 0.00 0.00 781.46 316.51 923.46 00:15:02.686 =================================================================================================================== 00:15:02.686 Total : 163070.93 637.00 0.00 0.00 781.46 316.51 923.46 00:15:02.944 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2744882 00:15:02.944 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2744886 00:15:02.944 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2744890 00:15:02.944 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:02.944 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.944 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:02.944 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.944 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:02.944 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:02.944 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:02.944 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:02.944 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:02.944 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:02.944 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:02.945 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:02.945 rmmod nvme_tcp 00:15:02.945 rmmod nvme_fabrics 00:15:02.945 rmmod nvme_keyring 00:15:02.945 11:30:37 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:02.945 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:02.945 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:02.945 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2744655 ']' 00:15:02.945 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2744655 00:15:02.945 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 2744655 ']' 00:15:02.945 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 2744655 00:15:02.945 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:15:03.203 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:03.203 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2744655 00:15:03.203 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:03.203 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:03.203 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2744655' 00:15:03.203 killing process with pid 2744655 00:15:03.203 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 2744655 00:15:03.203 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 2744655 00:15:03.203 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:03.203 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:03.203 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:03.203 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:03.203 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:03.203 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.203 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:03.203 11:30:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.737 11:30:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:05.737 00:15:05.737 real 0m11.722s 00:15:05.737 user 0m21.641s 00:15:05.737 sys 0m6.116s 00:15:05.737 11:30:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:05.737 11:30:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:05.737 ************************************ 00:15:05.737 END TEST nvmf_bdev_io_wait 00:15:05.737 ************************************ 00:15:05.737 11:30:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:05.737 11:30:39 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:05.737 11:30:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:05.737 11:30:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:05.737 11:30:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:05.737 ************************************ 00:15:05.737 START TEST nvmf_queue_depth 00:15:05.737 
************************************ 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:05.737 * Looking for test storage... 00:15:05.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:05.737 11:30:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:11.011 
11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:11.011 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:11.011 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:11.011 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:11.012 Found net devices under 0000:af:00.0: cvl_0_0 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:11.012 Found net devices under 0000:af:00.1: cvl_0_1 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:11.012 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:11.271 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:11.271 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:11.271 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:11.271 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:11.271 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:11.271 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:11.271 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:11.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:11.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:15:11.271 00:15:11.271 --- 10.0.0.2 ping statistics --- 00:15:11.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.271 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:15:11.271 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:11.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:11.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:15:11.271 00:15:11.271 --- 10.0.0.1 ping statistics --- 00:15:11.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.271 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:15:11.271 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:11.271 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:11.271 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:11.271 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:11.271 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:11.271 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:11.271 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:11.271 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:11.271 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:11.271 11:30:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:11.271 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:11.271 11:30:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:11.271 11:30:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:11.530 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2748954 00:15:11.530 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:11.530 11:30:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2748954 00:15:11.530 11:30:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2748954 ']' 00:15:11.530 11:30:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.530 11:30:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:11.530 11:30:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.530 11:30:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:11.530 11:30:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:11.530 [2024-07-15 11:30:45.793576] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:15:11.530 [2024-07-15 11:30:45.793635] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.530 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.530 [2024-07-15 11:30:45.880004] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.530 [2024-07-15 11:30:45.982386] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.530 [2024-07-15 11:30:45.982434] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.530 [2024-07-15 11:30:45.982448] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.530 [2024-07-15 11:30:45.982459] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.530 [2024-07-15 11:30:45.982470] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:11.530 [2024-07-15 11:30:45.982496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:12.468 [2024-07-15 11:30:46.790440] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:12.468 Malloc0 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.468 
11:30:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:12.468 [2024-07-15 11:30:46.867550] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2749037 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2749037 /var/tmp/bdevperf.sock 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2749037 ']' 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:12.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:12.468 11:30:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:12.728 [2024-07-15 11:30:46.953405] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:15:12.728 [2024-07-15 11:30:46.953515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2749037 ] 00:15:12.728 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.728 [2024-07-15 11:30:47.069271] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.728 [2024-07-15 11:30:47.159908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.664 11:30:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:13.664 11:30:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:13.664 11:30:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:13.664 11:30:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.664 11:30:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:13.923 NVMe0n1 00:15:13.923 11:30:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.923 11:30:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:14.181 Running I/O for 10 seconds... 00:15:24.259 00:15:24.259 Latency(us) 00:15:24.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.259 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:24.259 Verification LBA range: start 0x0 length 0x4000 00:15:24.259 NVMe0n1 : 10.10 6611.50 25.83 0.00 0.00 153912.19 17754.30 92941.96 00:15:24.259 =================================================================================================================== 00:15:24.259 Total : 6611.50 25.83 0.00 0.00 153912.19 17754.30 92941.96 00:15:24.259 0 00:15:24.259 11:30:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2749037 00:15:24.259 11:30:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2749037 ']' 00:15:24.259 11:30:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2749037 00:15:24.259 11:30:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:24.259 11:30:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:24.259 11:30:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2749037 00:15:24.259 11:30:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:24.259 11:30:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:24.259 11:30:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2749037' 00:15:24.259 killing process with pid 2749037 00:15:24.259 11:30:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2749037 00:15:24.259 Received shutdown signal, test time was about 10.000000 seconds 00:15:24.259 00:15:24.259 Latency(us) 00:15:24.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.259 
=================================================================================================================== 00:15:24.259 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:24.259 11:30:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2749037 00:15:24.519 11:30:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:24.519 11:30:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:24.519 11:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:24.519 11:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:24.519 11:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:24.519 11:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:24.519 11:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:24.519 11:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:24.519 rmmod nvme_tcp 00:15:24.519 rmmod nvme_fabrics 00:15:24.519 rmmod nvme_keyring 00:15:24.519 11:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:24.519 11:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:24.519 11:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:24.519 11:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2748954 ']' 00:15:24.519 11:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2748954 00:15:24.519 11:30:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2748954 ']' 00:15:24.519 11:30:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2748954 00:15:24.519 11:30:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:24.519 11:30:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:24.519 11:30:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2748954 00:15:24.519 11:30:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:24.519 11:30:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:24.519 11:30:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2748954' 00:15:24.519 killing process with pid 2748954 00:15:24.519 11:30:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2748954 00:15:24.519 11:30:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2748954 00:15:24.778 11:30:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:24.778 11:30:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:24.778 11:30:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:24.778 11:30:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:24.778 11:30:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:24.778 11:30:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.778 11:30:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:24.778 11:30:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.314 11:31:01 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:27.314 00:15:27.314 real 0m21.499s 00:15:27.314 user 0m27.086s 00:15:27.314 sys 0m5.833s 00:15:27.314 11:31:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:27.314 11:31:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:27.314 ************************************ 00:15:27.314 END TEST nvmf_queue_depth 00:15:27.314 ************************************ 00:15:27.314 11:31:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:27.314 11:31:01 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:27.314 11:31:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:27.314 11:31:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:27.314 11:31:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:27.314 ************************************ 00:15:27.314 START TEST nvmf_target_multipath 00:15:27.314 ************************************ 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:27.314 * Looking for test storage... 00:15:27.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:27.314 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:27.315 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.315 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:27.315 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:27.315 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:27.315 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.315 11:31:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.315 11:31:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.315 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:27.315 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:27.315 11:31:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:27.315 11:31:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:33.884 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:33.885 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:33.885 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:33.885 Found net devices under 0000:af:00.0: cvl_0_0 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:33.885 Found net devices under 0000:af:00.1: cvl_0_1 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:33.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:33.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:15:33.885 00:15:33.885 --- 10.0.0.2 ping statistics --- 00:15:33.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.885 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:33.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:33.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:15:33.885 00:15:33.885 --- 10.0.0.1 ping statistics --- 00:15:33.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.885 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:33.885 only one NIC for nvmf test 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:33.885 rmmod nvme_tcp 00:15:33.885 rmmod nvme_fabrics 00:15:33.885 rmmod nvme_keyring 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:33.885 11:31:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.262 11:31:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:15:35.262 11:31:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:15:35.262 11:31:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:15:35.262 11:31:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:35.262 11:31:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:35.262 11:31:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:35.262 11:31:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:35.262 11:31:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:35.262 11:31:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:35.262 11:31:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:35.262 11:31:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:35.262 11:31:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:35.262 11:31:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:35.262 11:31:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:35.262 11:31:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:35.262 11:31:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:35.262 11:31:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:35.262 11:31:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:35.262 11:31:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.262 11:31:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:35.262 11:31:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.262 11:31:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:35.262 00:15:35.262 real 0m8.175s 00:15:35.262 user 0m1.627s 00:15:35.262 sys 0m4.520s 00:15:35.262 11:31:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:35.262 11:31:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:35.262 ************************************ 00:15:35.262 END TEST nvmf_target_multipath 00:15:35.262 ************************************ 00:15:35.262 11:31:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:35.262 11:31:09 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:35.262 11:31:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:35.262 11:31:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:35.262 11:31:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:35.262 ************************************ 00:15:35.262 START TEST nvmf_zcopy 00:15:35.262 ************************************ 00:15:35.262 11:31:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:35.262 * Looking for test storage... 
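[editor's note] The multipath test above bails out ("only one NIC for nvmf test") right after nvmftestinit, and the zcopy test starting here rebuilds the same namespace-based TCP test bed. A sketch of that bring-up, pieced together from the ip/iptables commands traced in this log; the cvl_0_0/cvl_0_1 names and the 10.0.0.x addresses are specific to this host's E810 ports:
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                       # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target -> initiator sanity check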
00:15:35.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:35.262 11:31:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:35.262 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:15:35.522 11:31:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:40.792 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:40.793 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:40.793 
11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:40.793 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:40.793 Found net devices under 0000:af:00.0: cvl_0_0 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:40.793 Found net devices under 0000:af:00.1: cvl_0_1 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:40.793 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:41.051 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:41.051 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:41.051 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:41.051 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:41.051 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:41.051 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:41.051 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:41.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:41.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:15:41.051 00:15:41.051 --- 10.0.0.2 ping statistics --- 00:15:41.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.051 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:15:41.051 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:41.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:41.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:15:41.051 00:15:41.051 --- 10.0.0.1 ping statistics --- 00:15:41.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.051 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:15:41.051 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:41.051 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:15:41.051 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:41.051 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:41.051 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:41.051 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:41.051 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:41.051 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:41.051 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:41.310 11:31:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:41.310 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:41.310 11:31:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:41.310 11:31:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:41.310 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2758506 00:15:41.310 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:41.310 11:31:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2758506 00:15:41.310 11:31:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 2758506 ']' 00:15:41.310 11:31:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.310 11:31:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:41.310 11:31:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.310 11:31:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:41.310 11:31:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:41.310 [2024-07-15 11:31:15.574605] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:15:41.310 [2024-07-15 11:31:15.574661] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.310 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.310 [2024-07-15 11:31:15.662369] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.310 [2024-07-15 11:31:15.764284] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:41.310 [2024-07-15 11:31:15.764338] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:41.310 [2024-07-15 11:31:15.764351] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:41.310 [2024-07-15 11:31:15.764362] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:41.310 [2024-07-15 11:31:15.764372] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:41.310 [2024-07-15 11:31:15.764401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:42.246 [2024-07-15 11:31:16.489591] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:42.246 [2024-07-15 11:31:16.509765] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:42.246 malloc0 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.246 
11:31:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:42.246 { 00:15:42.246 "params": { 00:15:42.246 "name": "Nvme$subsystem", 00:15:42.246 "trtype": "$TEST_TRANSPORT", 00:15:42.246 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:42.246 "adrfam": "ipv4", 00:15:42.246 "trsvcid": "$NVMF_PORT", 00:15:42.246 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:42.246 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:42.246 "hdgst": ${hdgst:-false}, 00:15:42.246 "ddgst": ${ddgst:-false} 00:15:42.246 }, 00:15:42.246 "method": "bdev_nvme_attach_controller" 00:15:42.246 } 00:15:42.246 EOF 00:15:42.246 )") 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:42.246 11:31:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:42.246 "params": { 00:15:42.246 "name": "Nvme1", 00:15:42.246 "trtype": "tcp", 00:15:42.246 "traddr": "10.0.0.2", 00:15:42.246 "adrfam": "ipv4", 00:15:42.246 "trsvcid": "4420", 00:15:42.246 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:42.246 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:42.246 "hdgst": false, 00:15:42.246 "ddgst": false 00:15:42.246 }, 00:15:42.246 "method": "bdev_nvme_attach_controller" 00:15:42.246 }' 00:15:42.246 [2024-07-15 11:31:16.598297] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:15:42.246 [2024-07-15 11:31:16.598356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2758584 ] 00:15:42.246 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.247 [2024-07-15 11:31:16.680163] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.505 [2024-07-15 11:31:16.769646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.764 Running I/O for 10 seconds... 
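[editor's note] Before the 10-second run above, zcopy.sh configured the target (nvmf_tgt running inside the cvl_0_0_ns_spdk namespace) entirely over RPC. A sketch of that sequence with rpc.py in place of the test's rpc_cmd wrapper; the NQN, serial number, listener address and malloc sizing are taken from the trace:
  rpc=./scripts/rpc.py                                      # talks to /var/tmp/spdk.sock by default
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy         # TCP transport with zero-copy enabled
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 4096 -b malloc0                # 32 MiB RAM-backed namespace
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1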
00:15:52.738 
00:15:52.738                                                                                      Latency(us)
00:15:52.738  Device Information                                                          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:52.738  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:15:52.738    Verification LBA range: start 0x0 length 0x1000
00:15:52.738    Nvme1n1                                                                   :      10.02    4511.81      35.25       0.00     0.00   28279.72    4617.31   36700.16
00:15:52.738  ===================================================================================================================
00:15:52.738    Total                                                                     :               4511.81      35.25       0.00     0.00   28279.72    4617.31   36700.16
00:15:52.997 11:31:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2760618
00:15:52.997 11:31:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:15:52.997 11:31:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:52.997 11:31:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:15:52.997 11:31:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:15:52.997 11:31:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:15:52.997 11:31:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:15:52.997 11:31:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:15:52.997 11:31:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:15:52.997 {
00:15:52.997   "params": {
00:15:52.997     "name": "Nvme$subsystem",
00:15:52.997     "trtype": "$TEST_TRANSPORT",
00:15:52.997     "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:52.997     "adrfam": "ipv4",
00:15:52.997     "trsvcid": "$NVMF_PORT",
00:15:52.997     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:52.997     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:52.997     "hdgst": ${hdgst:-false},
00:15:52.997     "ddgst": ${ddgst:-false}
00:15:52.997   },
00:15:52.997   "method": "bdev_nvme_attach_controller"
00:15:52.997 }
00:15:52.997 EOF
00:15:52.997 )")
00:15:52.997 [2024-07-15 11:31:27.343543] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:52.997 [2024-07-15 11:31:27.343587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:52.997 11:31:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:15:52.997 11:31:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
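From here on the log is dominated by the same two messages repeated with fresh timestamps: while the 5-second randrw bdevperf job launched above is in flight, the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1, and every attempt is rejected because that namespace already exists, which the RPC layer then reports as being unable to add the namespace. The zcopy.sh loop that drives this is not reproduced in the log; the sketch below is only an assumption-labelled approximation of that pattern, reusing the hypothetical config file from the previous sketch.

#!/usr/bin/env bash
# Hedged sketch, NOT the actual zcopy.sh body (which this log does not show):
# start the second bdevperf job and repeatedly call the add-namespace RPC while
# it runs. Each call is expected to fail with "Requested NSID 1 already in use",
# which is exactly the ERROR pair repeated throughout the rest of this output.
build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!

while kill -0 "$perfpid" 2> /dev/null; do
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    sleep 0.1
done
wait "$perfpid"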
00:15:52.997 11:31:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:52.997 11:31:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:52.997 "params": { 00:15:52.997 "name": "Nvme1", 00:15:52.997 "trtype": "tcp", 00:15:52.997 "traddr": "10.0.0.2", 00:15:52.997 "adrfam": "ipv4", 00:15:52.997 "trsvcid": "4420", 00:15:52.997 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.997 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:52.997 "hdgst": false, 00:15:52.997 "ddgst": false 00:15:52.997 }, 00:15:52.997 "method": "bdev_nvme_attach_controller" 00:15:52.997 }' 00:15:52.997 [2024-07-15 11:31:27.355538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.997 [2024-07-15 11:31:27.355558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.997 [2024-07-15 11:31:27.363560] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.997 [2024-07-15 11:31:27.363578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.997 [2024-07-15 11:31:27.375598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.997 [2024-07-15 11:31:27.375615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.997 [2024-07-15 11:31:27.387632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.997 [2024-07-15 11:31:27.387649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.997 [2024-07-15 11:31:27.389216] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:15:52.997 [2024-07-15 11:31:27.389285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2760618 ] 00:15:52.997 [2024-07-15 11:31:27.399667] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.997 [2024-07-15 11:31:27.399686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.997 [2024-07-15 11:31:27.411698] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.997 [2024-07-15 11:31:27.411716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.997 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.997 [2024-07-15 11:31:27.423735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.997 [2024-07-15 11:31:27.423753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.997 [2024-07-15 11:31:27.435770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.997 [2024-07-15 11:31:27.435787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.997 [2024-07-15 11:31:27.447805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.997 [2024-07-15 11:31:27.447822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.997 [2024-07-15 11:31:27.459840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:52.997 [2024-07-15 11:31:27.459858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.257 [2024-07-15 11:31:27.469856] app.c: 
908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.257 [2024-07-15 11:31:27.471873] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.257 [2024-07-15 11:31:27.471892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.257 [2024-07-15 11:31:27.483912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.257 [2024-07-15 11:31:27.483932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.257 [2024-07-15 11:31:27.495947] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.257 [2024-07-15 11:31:27.495964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.257 [2024-07-15 11:31:27.507982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.257 [2024-07-15 11:31:27.507999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.257 [2024-07-15 11:31:27.520026] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.257 [2024-07-15 11:31:27.520053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.257 [2024-07-15 11:31:27.532054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.257 [2024-07-15 11:31:27.532071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.257 [2024-07-15 11:31:27.544092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.257 [2024-07-15 11:31:27.544110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.257 [2024-07-15 11:31:27.556126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.257 [2024-07-15 11:31:27.556145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.257 [2024-07-15 11:31:27.556260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.257 [2024-07-15 11:31:27.568168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.257 [2024-07-15 11:31:27.568191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.257 [2024-07-15 11:31:27.580202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.257 [2024-07-15 11:31:27.580225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.257 [2024-07-15 11:31:27.592231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.257 [2024-07-15 11:31:27.592250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.257 [2024-07-15 11:31:27.604274] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.257 [2024-07-15 11:31:27.604292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.257 [2024-07-15 11:31:27.616307] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.257 [2024-07-15 11:31:27.616326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.257 [2024-07-15 11:31:27.628351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.257 [2024-07-15 11:31:27.628376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:15:53.257 [2024-07-15 11:31:27.640368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.257 [2024-07-15 11:31:27.640385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.257 [2024-07-15 11:31:27.652434] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.257 [2024-07-15 11:31:27.652465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.257 [2024-07-15 11:31:27.664450] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.257 [2024-07-15 11:31:27.664473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.257 [2024-07-15 11:31:27.676488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.257 [2024-07-15 11:31:27.676511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.257 [2024-07-15 11:31:27.688528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.257 [2024-07-15 11:31:27.688551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.257 [2024-07-15 11:31:27.700557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.257 [2024-07-15 11:31:27.700580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.257 [2024-07-15 11:31:27.712599] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.257 [2024-07-15 11:31:27.712628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.516 Running I/O for 5 seconds... 00:15:53.516 [2024-07-15 11:31:27.724623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.516 [2024-07-15 11:31:27.724642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.516 [2024-07-15 11:31:27.746012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.516 [2024-07-15 11:31:27.746042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.516 [2024-07-15 11:31:27.764839] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.516 [2024-07-15 11:31:27.764869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.516 [2024-07-15 11:31:27.781883] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.516 [2024-07-15 11:31:27.781913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.516 [2024-07-15 11:31:27.800706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.516 [2024-07-15 11:31:27.800735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.516 [2024-07-15 11:31:27.817615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.516 [2024-07-15 11:31:27.817645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.516 [2024-07-15 11:31:27.836523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.516 [2024-07-15 11:31:27.836552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.516 [2024-07-15 11:31:27.854526] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:53.517 [2024-07-15 11:31:27.854555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.517 [2024-07-15 11:31:27.872173] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.517 [2024-07-15 11:31:27.872202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.517 [2024-07-15 11:31:27.891348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.517 [2024-07-15 11:31:27.891382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.517 [2024-07-15 11:31:27.910500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.517 [2024-07-15 11:31:27.910529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.517 [2024-07-15 11:31:27.928612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.517 [2024-07-15 11:31:27.928641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.517 [2024-07-15 11:31:27.946709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.517 [2024-07-15 11:31:27.946739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.517 [2024-07-15 11:31:27.965590] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.517 [2024-07-15 11:31:27.965618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.776 [2024-07-15 11:31:27.982136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.776 [2024-07-15 11:31:27.982166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.776 [2024-07-15 11:31:28.001031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.776 [2024-07-15 11:31:28.001061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.776 [2024-07-15 11:31:28.019097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.776 [2024-07-15 11:31:28.019127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.776 [2024-07-15 11:31:28.037015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.776 [2024-07-15 11:31:28.037044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.776 [2024-07-15 11:31:28.054687] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.776 [2024-07-15 11:31:28.054716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.776 [2024-07-15 11:31:28.073459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.776 [2024-07-15 11:31:28.073487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.776 [2024-07-15 11:31:28.092416] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.776 [2024-07-15 11:31:28.092445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.776 [2024-07-15 11:31:28.109158] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.776 [2024-07-15 11:31:28.109187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:53.776 [2024-07-15 11:31:28.128276] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.776 [2024-07-15 11:31:28.128304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.776 [2024-07-15 11:31:28.147099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.776 [2024-07-15 11:31:28.147128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.776 [2024-07-15 11:31:28.165335] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.776 [2024-07-15 11:31:28.165363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.776 [2024-07-15 11:31:28.183707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.776 [2024-07-15 11:31:28.183736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.776 [2024-07-15 11:31:28.202398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.776 [2024-07-15 11:31:28.202427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.776 [2024-07-15 11:31:28.221534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.776 [2024-07-15 11:31:28.221563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.776 [2024-07-15 11:31:28.239694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.776 [2024-07-15 11:31:28.239729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.035 [2024-07-15 11:31:28.259451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.035 [2024-07-15 11:31:28.259481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.035 [2024-07-15 11:31:28.277665] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.036 [2024-07-15 11:31:28.277694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.036 [2024-07-15 11:31:28.295573] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.036 [2024-07-15 11:31:28.295602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.036 [2024-07-15 11:31:28.313580] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.036 [2024-07-15 11:31:28.313609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.036 [2024-07-15 11:31:28.331449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.036 [2024-07-15 11:31:28.331478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.036 [2024-07-15 11:31:28.348146] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.036 [2024-07-15 11:31:28.348177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.036 [2024-07-15 11:31:28.366077] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.036 [2024-07-15 11:31:28.366107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.036 [2024-07-15 11:31:28.385130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.036 
[2024-07-15 11:31:28.385160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.036 [2024-07-15 11:31:28.403921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.036 [2024-07-15 11:31:28.403949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.036 [2024-07-15 11:31:28.422775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.036 [2024-07-15 11:31:28.422804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.036 [2024-07-15 11:31:28.441856] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.036 [2024-07-15 11:31:28.441885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.036 [2024-07-15 11:31:28.460164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.036 [2024-07-15 11:31:28.460193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.036 [2024-07-15 11:31:28.478292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.036 [2024-07-15 11:31:28.478321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.036 [2024-07-15 11:31:28.496331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.036 [2024-07-15 11:31:28.496360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.295 [2024-07-15 11:31:28.515247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.295 [2024-07-15 11:31:28.515285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.295 [2024-07-15 11:31:28.533171] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.295 [2024-07-15 11:31:28.533198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.295 [2024-07-15 11:31:28.551211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.295 [2024-07-15 11:31:28.551239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.295 [2024-07-15 11:31:28.569115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.295 [2024-07-15 11:31:28.569145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.295 [2024-07-15 11:31:28.587055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.295 [2024-07-15 11:31:28.587090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.295 [2024-07-15 11:31:28.606175] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.295 [2024-07-15 11:31:28.606204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.295 [2024-07-15 11:31:28.624613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.295 [2024-07-15 11:31:28.624642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.295 [2024-07-15 11:31:28.642873] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.295 [2024-07-15 11:31:28.642902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.295 [2024-07-15 11:31:28.661783] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.295 [2024-07-15 11:31:28.661812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.295 [2024-07-15 11:31:28.679640] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.295 [2024-07-15 11:31:28.679669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.296 [2024-07-15 11:31:28.697343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.296 [2024-07-15 11:31:28.697372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.296 [2024-07-15 11:31:28.716231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.296 [2024-07-15 11:31:28.716266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.296 [2024-07-15 11:31:28.734335] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.296 [2024-07-15 11:31:28.734364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.296 [2024-07-15 11:31:28.752955] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.296 [2024-07-15 11:31:28.752984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.555 [2024-07-15 11:31:28.769783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.555 [2024-07-15 11:31:28.769812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.555 [2024-07-15 11:31:28.782813] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.555 [2024-07-15 11:31:28.782842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.555 [2024-07-15 11:31:28.797590] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.555 [2024-07-15 11:31:28.797619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.555 [2024-07-15 11:31:28.811873] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.555 [2024-07-15 11:31:28.811901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.555 [2024-07-15 11:31:28.829127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.555 [2024-07-15 11:31:28.829155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.555 [2024-07-15 11:31:28.845729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.555 [2024-07-15 11:31:28.845757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.555 [2024-07-15 11:31:28.863918] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.555 [2024-07-15 11:31:28.863947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.555 [2024-07-15 11:31:28.882544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.555 [2024-07-15 11:31:28.882573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.555 [2024-07-15 11:31:28.900381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.555 [2024-07-15 11:31:28.900410] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.555 [2024-07-15 11:31:28.918117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.555 [2024-07-15 11:31:28.918151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.555 [2024-07-15 11:31:28.936807] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.555 [2024-07-15 11:31:28.936836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.555 [2024-07-15 11:31:28.955185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.555 [2024-07-15 11:31:28.955213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.555 [2024-07-15 11:31:28.974263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.555 [2024-07-15 11:31:28.974292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.555 [2024-07-15 11:31:28.992222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.555 [2024-07-15 11:31:28.992250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.555 [2024-07-15 11:31:29.011116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.555 [2024-07-15 11:31:29.011145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.814 [2024-07-15 11:31:29.029000] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.814 [2024-07-15 11:31:29.029031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.814 [2024-07-15 11:31:29.047732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.814 [2024-07-15 11:31:29.047762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.814 [2024-07-15 11:31:29.065929] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.814 [2024-07-15 11:31:29.065958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.814 [2024-07-15 11:31:29.085321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.814 [2024-07-15 11:31:29.085350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.814 [2024-07-15 11:31:29.104602] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.814 [2024-07-15 11:31:29.104631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.814 [2024-07-15 11:31:29.121524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.814 [2024-07-15 11:31:29.121553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.814 [2024-07-15 11:31:29.134124] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.814 [2024-07-15 11:31:29.134152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.814 [2024-07-15 11:31:29.148051] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.814 [2024-07-15 11:31:29.148079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.814 [2024-07-15 11:31:29.162379] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.814 [2024-07-15 11:31:29.162407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.814 [2024-07-15 11:31:29.179555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.815 [2024-07-15 11:31:29.179584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.815 [2024-07-15 11:31:29.197609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.815 [2024-07-15 11:31:29.197638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.815 [2024-07-15 11:31:29.215390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.815 [2024-07-15 11:31:29.215418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.815 [2024-07-15 11:31:29.234459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.815 [2024-07-15 11:31:29.234487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.815 [2024-07-15 11:31:29.252756] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.815 [2024-07-15 11:31:29.252785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.815 [2024-07-15 11:31:29.271659] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.815 [2024-07-15 11:31:29.271688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.074 [2024-07-15 11:31:29.290656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.074 [2024-07-15 11:31:29.290686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.074 [2024-07-15 11:31:29.308832] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.074 [2024-07-15 11:31:29.308861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.074 [2024-07-15 11:31:29.327762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.074 [2024-07-15 11:31:29.327790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.074 [2024-07-15 11:31:29.346900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.074 [2024-07-15 11:31:29.346928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.074 [2024-07-15 11:31:29.366153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.074 [2024-07-15 11:31:29.366182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.074 [2024-07-15 11:31:29.384125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.074 [2024-07-15 11:31:29.384156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.074 [2024-07-15 11:31:29.402269] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.074 [2024-07-15 11:31:29.402298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.074 [2024-07-15 11:31:29.420346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.074 [2024-07-15 11:31:29.420374] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.074 [2024-07-15 11:31:29.439797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.074 [2024-07-15 11:31:29.439825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.074 [2024-07-15 11:31:29.459030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.074 [2024-07-15 11:31:29.459058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.074 [2024-07-15 11:31:29.478087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.074 [2024-07-15 11:31:29.478117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.074 [2024-07-15 11:31:29.498020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.074 [2024-07-15 11:31:29.498049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.074 [2024-07-15 11:31:29.516343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.074 [2024-07-15 11:31:29.516372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.074 [2024-07-15 11:31:29.533037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.074 [2024-07-15 11:31:29.533066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.334 [2024-07-15 11:31:29.551051] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.334 [2024-07-15 11:31:29.551080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.334 [2024-07-15 11:31:29.570360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.334 [2024-07-15 11:31:29.570389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.334 [2024-07-15 11:31:29.588241] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.334 [2024-07-15 11:31:29.588278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.334 [2024-07-15 11:31:29.607457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.334 [2024-07-15 11:31:29.607486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.334 [2024-07-15 11:31:29.625613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.334 [2024-07-15 11:31:29.625643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.334 [2024-07-15 11:31:29.644736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.334 [2024-07-15 11:31:29.644766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.334 [2024-07-15 11:31:29.662865] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.334 [2024-07-15 11:31:29.662893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.334 [2024-07-15 11:31:29.680803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.334 [2024-07-15 11:31:29.680833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.334 [2024-07-15 11:31:29.699024] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.334 [2024-07-15 11:31:29.699054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.334 [2024-07-15 11:31:29.718437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.334 [2024-07-15 11:31:29.718467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.334 [2024-07-15 11:31:29.735396] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.334 [2024-07-15 11:31:29.735428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.334 [2024-07-15 11:31:29.747644] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.334 [2024-07-15 11:31:29.747673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.334 [2024-07-15 11:31:29.761499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.334 [2024-07-15 11:31:29.761529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.334 [2024-07-15 11:31:29.779996] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.334 [2024-07-15 11:31:29.780026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.593 [2024-07-15 11:31:29.799167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.593 [2024-07-15 11:31:29.799197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.593 [2024-07-15 11:31:29.815744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.593 [2024-07-15 11:31:29.815773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.593 [2024-07-15 11:31:29.833921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.593 [2024-07-15 11:31:29.833949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.593 [2024-07-15 11:31:29.851557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.593 [2024-07-15 11:31:29.851586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.593 [2024-07-15 11:31:29.870659] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.593 [2024-07-15 11:31:29.870688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.593 [2024-07-15 11:31:29.887435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.593 [2024-07-15 11:31:29.887464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.593 [2024-07-15 11:31:29.900279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.593 [2024-07-15 11:31:29.900309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.593 [2024-07-15 11:31:29.915422] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.593 [2024-07-15 11:31:29.915451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.593 [2024-07-15 11:31:29.932615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.593 [2024-07-15 11:31:29.932644] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.593 [2024-07-15 11:31:29.950879] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.593 [2024-07-15 11:31:29.950909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.593 [2024-07-15 11:31:29.969098] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.593 [2024-07-15 11:31:29.969126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.593 [2024-07-15 11:31:29.988109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.593 [2024-07-15 11:31:29.988138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.593 [2024-07-15 11:31:30.005038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.593 [2024-07-15 11:31:30.005068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.593 [2024-07-15 11:31:30.017163] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.593 [2024-07-15 11:31:30.017193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.593 [2024-07-15 11:31:30.031342] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.593 [2024-07-15 11:31:30.031370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.593 [2024-07-15 11:31:30.049449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.593 [2024-07-15 11:31:30.049480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.853 [2024-07-15 11:31:30.067592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.853 [2024-07-15 11:31:30.067623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.853 [2024-07-15 11:31:30.085283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.853 [2024-07-15 11:31:30.085313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.853 [2024-07-15 11:31:30.104208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.853 [2024-07-15 11:31:30.104238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.853 [2024-07-15 11:31:30.123021] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.853 [2024-07-15 11:31:30.123051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.853 [2024-07-15 11:31:30.141378] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.853 [2024-07-15 11:31:30.141408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.853 [2024-07-15 11:31:30.159610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.853 [2024-07-15 11:31:30.159640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.853 [2024-07-15 11:31:30.177367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.853 [2024-07-15 11:31:30.177395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.853 [2024-07-15 11:31:30.194803] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.853 [2024-07-15 11:31:30.194833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.853 [2024-07-15 11:31:30.213553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.853 [2024-07-15 11:31:30.213582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.853 [2024-07-15 11:31:30.231540] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.853 [2024-07-15 11:31:30.231569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.853 [2024-07-15 11:31:30.249528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.853 [2024-07-15 11:31:30.249558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.853 [2024-07-15 11:31:30.267762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.853 [2024-07-15 11:31:30.267791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.853 [2024-07-15 11:31:30.286349] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.853 [2024-07-15 11:31:30.286378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.853 [2024-07-15 11:31:30.304521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.853 [2024-07-15 11:31:30.304559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.112 [2024-07-15 11:31:30.323379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.112 [2024-07-15 11:31:30.323409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.112 [2024-07-15 11:31:30.341505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.112 [2024-07-15 11:31:30.341534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.112 [2024-07-15 11:31:30.360676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.112 [2024-07-15 11:31:30.360705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.112 [2024-07-15 11:31:30.378488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.112 [2024-07-15 11:31:30.378517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.112 [2024-07-15 11:31:30.396178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.112 [2024-07-15 11:31:30.396206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.112 [2024-07-15 11:31:30.413756] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.112 [2024-07-15 11:31:30.413786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.112 [2024-07-15 11:31:30.431953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.112 [2024-07-15 11:31:30.431982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.112 [2024-07-15 11:31:30.450941] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.112 [2024-07-15 11:31:30.450970] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.112 [2024-07-15 11:31:30.468185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.112 [2024-07-15 11:31:30.468214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.112 [2024-07-15 11:31:30.486250] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.112 [2024-07-15 11:31:30.486291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.112 [2024-07-15 11:31:30.504621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.112 [2024-07-15 11:31:30.504650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.112 [2024-07-15 11:31:30.523593] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.112 [2024-07-15 11:31:30.523622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.112 [2024-07-15 11:31:30.541812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.112 [2024-07-15 11:31:30.541841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.112 [2024-07-15 11:31:30.560190] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.112 [2024-07-15 11:31:30.560220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.372 [2024-07-15 11:31:30.579636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.372 [2024-07-15 11:31:30.579666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.372 [2024-07-15 11:31:30.597637] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.372 [2024-07-15 11:31:30.597671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.372 [2024-07-15 11:31:30.616084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.372 [2024-07-15 11:31:30.616112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.372 [2024-07-15 11:31:30.632754] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.372 [2024-07-15 11:31:30.632782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.372 [2024-07-15 11:31:30.651556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.372 [2024-07-15 11:31:30.651587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.372 [2024-07-15 11:31:30.669548] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.372 [2024-07-15 11:31:30.669577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.372 [2024-07-15 11:31:30.688210] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.372 [2024-07-15 11:31:30.688240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.372 [2024-07-15 11:31:30.707020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.372 [2024-07-15 11:31:30.707048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.372 [2024-07-15 11:31:30.726005] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.372 [2024-07-15 11:31:30.726035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.372 [2024-07-15 11:31:30.745092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.372 [2024-07-15 11:31:30.745124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.372 [2024-07-15 11:31:30.761545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.372 [2024-07-15 11:31:30.761576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.372 [2024-07-15 11:31:30.780509] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.372 [2024-07-15 11:31:30.780538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.372 [2024-07-15 11:31:30.798275] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.372 [2024-07-15 11:31:30.798304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.372 [2024-07-15 11:31:30.817052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.372 [2024-07-15 11:31:30.817081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.372 [2024-07-15 11:31:30.836119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.372 [2024-07-15 11:31:30.836148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.630 [2024-07-15 11:31:30.854032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.630 [2024-07-15 11:31:30.854062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.630 [2024-07-15 11:31:30.872777] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.630 [2024-07-15 11:31:30.872806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.630 [2024-07-15 11:31:30.892037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.631 [2024-07-15 11:31:30.892065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.631 [2024-07-15 11:31:30.911410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.631 [2024-07-15 11:31:30.911439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.631 [2024-07-15 11:31:30.929427] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.631 [2024-07-15 11:31:30.929456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.631 [2024-07-15 11:31:30.948011] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.631 [2024-07-15 11:31:30.948045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.631 [2024-07-15 11:31:30.967157] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.631 [2024-07-15 11:31:30.967186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.631 [2024-07-15 11:31:30.984797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.631 [2024-07-15 11:31:30.984825] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.631 [2024-07-15 11:31:31.003751] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.631 [2024-07-15 11:31:31.003780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.631 [2024-07-15 11:31:31.021466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.631 [2024-07-15 11:31:31.021495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.631 [2024-07-15 11:31:31.039501] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.631 [2024-07-15 11:31:31.039530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.631 [2024-07-15 11:31:31.058570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.631 [2024-07-15 11:31:31.058600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.631 [2024-07-15 11:31:31.077760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.631 [2024-07-15 11:31:31.077789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.890 [2024-07-15 11:31:31.096058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.890 [2024-07-15 11:31:31.096089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.890 [2024-07-15 11:31:31.115253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.890 [2024-07-15 11:31:31.115290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.890 [2024-07-15 11:31:31.131958] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.890 [2024-07-15 11:31:31.131987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.890 [2024-07-15 11:31:31.151050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.890 [2024-07-15 11:31:31.151080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.890 [2024-07-15 11:31:31.167621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.890 [2024-07-15 11:31:31.167651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.890 [2024-07-15 11:31:31.186452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.890 [2024-07-15 11:31:31.186482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.890 [2024-07-15 11:31:31.203133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.890 [2024-07-15 11:31:31.203164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.890 [2024-07-15 11:31:31.222244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.890 [2024-07-15 11:31:31.222281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.890 [2024-07-15 11:31:31.240122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.890 [2024-07-15 11:31:31.240151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.890 [2024-07-15 11:31:31.256739] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.890 [2024-07-15 11:31:31.256769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.890 [2024-07-15 11:31:31.276620] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.890 [2024-07-15 11:31:31.276649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.890 [2024-07-15 11:31:31.295713] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.890 [2024-07-15 11:31:31.295748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.890 [2024-07-15 11:31:31.313914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.890 [2024-07-15 11:31:31.313943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.890 [2024-07-15 11:31:31.331826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.890 [2024-07-15 11:31:31.331855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.890 [2024-07-15 11:31:31.350612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.890 [2024-07-15 11:31:31.350641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.149 [2024-07-15 11:31:31.368695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.149 [2024-07-15 11:31:31.368724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.149 [2024-07-15 11:31:31.387803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.149 [2024-07-15 11:31:31.387833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.149 [2024-07-15 11:31:31.406636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.149 [2024-07-15 11:31:31.406666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.149 [2024-07-15 11:31:31.425049] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.149 [2024-07-15 11:31:31.425077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.149 [2024-07-15 11:31:31.444079] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.149 [2024-07-15 11:31:31.444108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.149 [2024-07-15 11:31:31.462153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.149 [2024-07-15 11:31:31.462182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.149 [2024-07-15 11:31:31.480308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.149 [2024-07-15 11:31:31.480337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.149 [2024-07-15 11:31:31.499486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.149 [2024-07-15 11:31:31.499516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.149 [2024-07-15 11:31:31.517994] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.149 [2024-07-15 11:31:31.518023] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.149 [2024-07-15 11:31:31.536027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.149 [2024-07-15 11:31:31.536056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.149 [2024-07-15 11:31:31.553869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.149 [2024-07-15 11:31:31.553899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.149 [2024-07-15 11:31:31.571985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.149 [2024-07-15 11:31:31.572014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.150 [2024-07-15 11:31:31.590100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.150 [2024-07-15 11:31:31.590128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.150 [2024-07-15 11:31:31.608097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.150 [2024-07-15 11:31:31.608127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.408 [2024-07-15 11:31:31.625836] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.408 [2024-07-15 11:31:31.625866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.408 [2024-07-15 11:31:31.644687] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.408 [2024-07-15 11:31:31.644722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.408 [2024-07-15 11:31:31.663836] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.408 [2024-07-15 11:31:31.663865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.408 [2024-07-15 11:31:31.681959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.408 [2024-07-15 11:31:31.681989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.408 [2024-07-15 11:31:31.699816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.408 [2024-07-15 11:31:31.699845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.408 [2024-07-15 11:31:31.717524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.408 [2024-07-15 11:31:31.717553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.408 [2024-07-15 11:31:31.735097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.408 [2024-07-15 11:31:31.735125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.408 [2024-07-15 11:31:31.753041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.408 [2024-07-15 11:31:31.753070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.408 [2024-07-15 11:31:31.770856] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.408 [2024-07-15 11:31:31.770885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.408 [2024-07-15 11:31:31.787635] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.408 [2024-07-15 11:31:31.787664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.408 [2024-07-15 11:31:31.805376] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.408 [2024-07-15 11:31:31.805405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.408 [2024-07-15 11:31:31.824193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.408 [2024-07-15 11:31:31.824222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.408 [2024-07-15 11:31:31.840856] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.408 [2024-07-15 11:31:31.840886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.408 [2024-07-15 11:31:31.859632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.408 [2024-07-15 11:31:31.859660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.667 [2024-07-15 11:31:31.876523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.667 [2024-07-15 11:31:31.876553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.667 [2024-07-15 11:31:31.895178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.667 [2024-07-15 11:31:31.895207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.667 [2024-07-15 11:31:31.913485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.667 [2024-07-15 11:31:31.913514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.667 [2024-07-15 11:31:31.932431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.667 [2024-07-15 11:31:31.932459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.667 [2024-07-15 11:31:31.951483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.667 [2024-07-15 11:31:31.951512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.667 [2024-07-15 11:31:31.970572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.667 [2024-07-15 11:31:31.970602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.667 [2024-07-15 11:31:31.989357] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.667 [2024-07-15 11:31:31.989386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.667 [2024-07-15 11:31:32.006253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.667 [2024-07-15 11:31:32.006289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.667 [2024-07-15 11:31:32.018849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.667 [2024-07-15 11:31:32.018877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.667 [2024-07-15 11:31:32.032688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.667 [2024-07-15 11:31:32.032717] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.667 [2024-07-15 11:31:32.051019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.667 [2024-07-15 11:31:32.051047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.667 [2024-07-15 11:31:32.069233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.667 [2024-07-15 11:31:32.069271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.667 [2024-07-15 11:31:32.087883] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.667 [2024-07-15 11:31:32.087913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.667 [2024-07-15 11:31:32.105937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.667 [2024-07-15 11:31:32.105965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.667 [2024-07-15 11:31:32.123716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.667 [2024-07-15 11:31:32.123743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.927 [2024-07-15 11:31:32.143042] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.927 [2024-07-15 11:31:32.143071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.927 [2024-07-15 11:31:32.161399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.927 [2024-07-15 11:31:32.161427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.927 [2024-07-15 11:31:32.180623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.927 [2024-07-15 11:31:32.180653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.927 [2024-07-15 11:31:32.198660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.927 [2024-07-15 11:31:32.198689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.927 [2024-07-15 11:31:32.217392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.927 [2024-07-15 11:31:32.217421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.927 [2024-07-15 11:31:32.236037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.927 [2024-07-15 11:31:32.236066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.927 [2024-07-15 11:31:32.255111] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.927 [2024-07-15 11:31:32.255139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.927 [2024-07-15 11:31:32.273170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.927 [2024-07-15 11:31:32.273199] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.927 [2024-07-15 11:31:32.292243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.927 [2024-07-15 11:31:32.292281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.927 [2024-07-15 11:31:32.310148] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.927 [2024-07-15 11:31:32.310177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.927 [2024-07-15 11:31:32.326989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.927 [2024-07-15 11:31:32.327019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.927 [2024-07-15 11:31:32.345812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.927 [2024-07-15 11:31:32.345841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.927 [2024-07-15 11:31:32.364928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.927 [2024-07-15 11:31:32.364956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.927 [2024-07-15 11:31:32.384063] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.927 [2024-07-15 11:31:32.384091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.186 [2024-07-15 11:31:32.403549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.186 [2024-07-15 11:31:32.403580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.186 [2024-07-15 11:31:32.420146] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.186 [2024-07-15 11:31:32.420175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.186 [2024-07-15 11:31:32.439078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.186 [2024-07-15 11:31:32.439107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.186 [2024-07-15 11:31:32.455948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.186 [2024-07-15 11:31:32.455976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.186 [2024-07-15 11:31:32.468803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.186 [2024-07-15 11:31:32.468831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.186 [2024-07-15 11:31:32.481512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.186 [2024-07-15 11:31:32.481541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.186 [2024-07-15 11:31:32.496017] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.186 [2024-07-15 11:31:32.496045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.186 [2024-07-15 11:31:32.512791] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.186 [2024-07-15 11:31:32.512821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.186 [2024-07-15 11:31:32.532422] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.186 [2024-07-15 11:31:32.532451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.186 [2024-07-15 11:31:32.551447] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.186 [2024-07-15 11:31:32.551476] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.186 [2024-07-15 11:31:32.570480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.186 [2024-07-15 11:31:32.570509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.187 [2024-07-15 11:31:32.589717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.187 [2024-07-15 11:31:32.589747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.187 [2024-07-15 11:31:32.607748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.187 [2024-07-15 11:31:32.607777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.187 [2024-07-15 11:31:32.626592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.187 [2024-07-15 11:31:32.626620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.187 [2024-07-15 11:31:32.644549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.187 [2024-07-15 11:31:32.644578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.446 [2024-07-15 11:31:32.663883] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.446 [2024-07-15 11:31:32.663914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.446 [2024-07-15 11:31:32.682167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.446 [2024-07-15 11:31:32.682197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.446 [2024-07-15 11:31:32.701403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.446 [2024-07-15 11:31:32.701433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.446 [2024-07-15 11:31:32.719205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.446 [2024-07-15 11:31:32.719233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.446 [2024-07-15 11:31:32.738304] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.446 [2024-07-15 11:31:32.738334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.446 00:15:58.446 Latency(us) 00:15:58.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.446 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:58.446 Nvme1n1 : 5.01 8850.41 69.14 0.00 0.00 14444.72 6166.34 30980.65 00:15:58.446 =================================================================================================================== 00:15:58.446 Total : 8850.41 69.14 0.00 0.00 14444.72 6166.34 30980.65 00:15:58.446 [2024-07-15 11:31:32.750848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.446 [2024-07-15 11:31:32.750876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.446 [2024-07-15 11:31:32.762882] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.446 [2024-07-15 11:31:32.762907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.446 [2024-07-15 11:31:32.774909] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.446 [2024-07-15 11:31:32.774928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.446 [2024-07-15 11:31:32.786955] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.446 [2024-07-15 11:31:32.786981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.446 [2024-07-15 11:31:32.798980] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.446 [2024-07-15 11:31:32.799001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.446 [2024-07-15 11:31:32.811010] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.446 [2024-07-15 11:31:32.811030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.446 [2024-07-15 11:31:32.823048] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.446 [2024-07-15 11:31:32.823069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.446 [2024-07-15 11:31:32.835081] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.446 [2024-07-15 11:31:32.835101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.446 [2024-07-15 11:31:32.847111] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.446 [2024-07-15 11:31:32.847130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.446 [2024-07-15 11:31:32.859147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.446 [2024-07-15 11:31:32.859166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.446 [2024-07-15 11:31:32.871180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.446 [2024-07-15 11:31:32.871204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.446 [2024-07-15 11:31:32.883220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.446 [2024-07-15 11:31:32.883240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.446 [2024-07-15 11:31:32.895261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.446 [2024-07-15 11:31:32.895281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.446 [2024-07-15 11:31:32.907295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.446 [2024-07-15 11:31:32.907312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.705 [2024-07-15 11:31:32.919333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.705 [2024-07-15 11:31:32.919353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.705 [2024-07-15 11:31:32.931360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.705 [2024-07-15 11:31:32.931378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.705 [2024-07-15 11:31:32.943393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.705 [2024-07-15 11:31:32.943411] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2760618) - No such process 00:15:58.705 11:31:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2760618 00:15:58.705 11:31:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:58.705 11:31:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.705 11:31:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:58.705 11:31:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.706 11:31:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:58.706 11:31:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.706 11:31:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:58.706 delay0 00:15:58.706 11:31:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.706 11:31:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:58.706 11:31:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.706 11:31:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:58.706 11:31:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.706 11:31:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:58.706 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.706 [2024-07-15 11:31:33.121426] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:05.289 Initializing NVMe Controllers 00:16:05.289 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:05.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:05.289 Initialization complete. Launching workers. 
00:16:05.289 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 84 00:16:05.289 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 371, failed to submit 33 00:16:05.289 success 201, unsuccess 170, failed 0 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:05.289 rmmod nvme_tcp 00:16:05.289 rmmod nvme_fabrics 00:16:05.289 rmmod nvme_keyring 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2758506 ']' 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2758506 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 2758506 ']' 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 2758506 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2758506 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2758506' 00:16:05.289 killing process with pid 2758506 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 2758506 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 2758506 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.289 11:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.297 11:31:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:07.297 00:16:07.297 real 0m32.107s 00:16:07.297 user 0m44.165s 00:16:07.297 sys 0m9.733s 00:16:07.297 11:31:41 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:16:07.297 11:31:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:07.297 ************************************ 00:16:07.297 END TEST nvmf_zcopy 00:16:07.297 ************************************ 00:16:07.556 11:31:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:07.556 11:31:41 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:07.556 11:31:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:07.556 11:31:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:07.556 11:31:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:07.556 ************************************ 00:16:07.556 START TEST nvmf_nmic 00:16:07.556 ************************************ 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:07.556 * Looking for test storage... 00:16:07.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.556 11:31:41 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:07.557 11:31:41 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.557 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:07.557 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:07.557 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:07.557 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:07.557 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.557 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.557 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:07.557 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:07.557 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:07.557 11:31:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:07.557 11:31:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:07.557 11:31:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:07.557 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:07.557 11:31:41 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:07.557 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:07.557 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:07.557 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:07.557 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.557 11:31:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.557 11:31:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.557 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:07.557 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:07.557 11:31:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:07.557 11:31:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:14.141 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:14.141 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:14.141 Found net devices under 0000:af:00.0: cvl_0_0 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:14.141 Found net devices under 0000:af:00.1: cvl_0_1 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.141 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:14.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:14.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:16:14.142 00:16:14.142 --- 10.0.0.2 ping statistics --- 00:16:14.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.142 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:14.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:14.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:16:14.142 00:16:14.142 --- 10.0.0.1 ping statistics --- 00:16:14.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.142 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2766378 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2766378 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 2766378 ']' 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:14.142 11:31:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:14.142 [2024-07-15 11:31:47.956735] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:16:14.142 [2024-07-15 11:31:47.956791] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.142 EAL: No free 2048 kB hugepages reported on node 1 00:16:14.142 [2024-07-15 11:31:48.042523] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:14.142 [2024-07-15 11:31:48.134532] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.142 [2024-07-15 11:31:48.134577] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:14.142 [2024-07-15 11:31:48.134587] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:14.142 [2024-07-15 11:31:48.134596] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:14.142 [2024-07-15 11:31:48.134603] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:14.142 [2024-07-15 11:31:48.134660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.142 [2024-07-15 11:31:48.134794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:14.142 [2024-07-15 11:31:48.134906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.142 [2024-07-15 11:31:48.134906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:14.723 11:31:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:14.723 11:31:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:16:14.723 11:31:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:14.723 11:31:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:14.723 11:31:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:14.723 11:31:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.723 11:31:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:14.723 11:31:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.723 11:31:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:14.723 [2024-07-15 11:31:48.948239] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.723 11:31:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.723 11:31:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:14.723 11:31:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.723 11:31:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:14.723 Malloc0 00:16:14.724 11:31:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.724 11:31:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:14.724 11:31:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.724 11:31:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:14.724 11:31:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.724 11:31:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:14.724 11:31:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.724 11:31:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:14.724 [2024-07-15 11:31:49.008337] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:14.724 test case1: single bdev can't be used in multiple subsystems 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:14.724 [2024-07-15 11:31:49.032227] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:14.724 [2024-07-15 11:31:49.032259] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:14.724 [2024-07-15 11:31:49.032271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.724 request: 00:16:14.724 { 00:16:14.724 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:14.724 "namespace": { 00:16:14.724 "bdev_name": "Malloc0", 00:16:14.724 "no_auto_visible": false 00:16:14.724 }, 00:16:14.724 "method": "nvmf_subsystem_add_ns", 00:16:14.724 "req_id": 1 00:16:14.724 } 00:16:14.724 Got JSON-RPC error response 00:16:14.724 response: 00:16:14.724 { 00:16:14.724 "code": -32602, 00:16:14.724 "message": "Invalid parameters" 00:16:14.724 } 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:14.724 Adding namespace failed - expected result. 
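The claim failure above is the intended outcome of test case1: once Malloc0 is attached to nqn.2016-06.io.spdk:cnode1, the NVMe-oF target holds an exclusive_write claim on the bdev, so adding the same bdev to nqn.2016-06.io.spdk:cnode2 is rejected. A minimal standalone sketch of the same sequence follows, assuming a running nvmf_tgt and the scripts/rpc.py helper from an SPDK checkout on its default RPC socket; the harness above drives the equivalent RPCs through its rpc_cmd wrapper inside the cvl_0_0_ns_spdk network namespace, so the exact invocation differs.
# create the TCP transport and a 64 MB malloc bdev
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# the first subsystem claims Malloc0 as its namespace
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# a second subsystem cannot reuse the already-claimed bdev; this call is expected to fail
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'add_ns rejected as expected'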
00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:14.724 test case2: host connect to nvmf target in multiple paths 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:14.724 [2024-07-15 11:31:49.044412] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.724 11:31:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:16.100 11:31:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:17.481 11:31:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:17.481 11:31:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:16:17.481 11:31:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:17.481 11:31:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:17.481 11:31:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:16:19.391 11:31:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:19.391 11:31:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:19.391 11:31:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:19.391 11:31:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:19.391 11:31:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:19.391 11:31:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:16:19.391 11:31:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:19.391 [global] 00:16:19.391 thread=1 00:16:19.391 invalidate=1 00:16:19.391 rw=write 00:16:19.391 time_based=1 00:16:19.391 runtime=1 00:16:19.391 ioengine=libaio 00:16:19.391 direct=1 00:16:19.391 bs=4096 00:16:19.391 iodepth=1 00:16:19.391 norandommap=0 00:16:19.391 numjobs=1 00:16:19.391 00:16:19.391 verify_dump=1 00:16:19.391 verify_backlog=512 00:16:19.391 verify_state_save=0 00:16:19.391 do_verify=1 00:16:19.391 verify=crc32c-intel 00:16:19.391 [job0] 00:16:19.391 filename=/dev/nvme0n1 00:16:19.391 Could not set queue depth (nvme0n1) 00:16:19.650 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:19.650 fio-3.35 00:16:19.650 Starting 1 thread 00:16:21.025 00:16:21.025 job0: (groupid=0, jobs=1): err= 0: pid=2767659: Mon Jul 15 11:31:55 2024 00:16:21.025 read: IOPS=20, BW=81.3KiB/s (83.3kB/s)(84.0KiB/1033msec) 00:16:21.025 slat (nsec): min=10307, max=23411, avg=21906.62, stdev=2730.93 
00:16:21.025 clat (usec): min=40932, max=41944, avg=41134.66, stdev=353.72 00:16:21.025 lat (usec): min=40954, max=41966, avg=41156.56, stdev=353.17 00:16:21.025 clat percentiles (usec): 00:16:21.025 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:21.025 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:21.025 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:16:21.025 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:21.025 | 99.99th=[42206] 00:16:21.025 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:16:21.025 slat (nsec): min=10227, max=45627, avg=11884.76, stdev=2921.70 00:16:21.025 clat (usec): min=182, max=486, avg=314.47, stdev=55.97 00:16:21.025 lat (usec): min=193, max=526, avg=326.36, stdev=56.28 00:16:21.025 clat percentiles (usec): 00:16:21.025 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 255], 00:16:21.025 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 343], 00:16:21.026 | 70.00th=[ 343], 80.00th=[ 351], 90.00th=[ 355], 95.00th=[ 359], 00:16:21.026 | 99.00th=[ 367], 99.50th=[ 375], 99.90th=[ 486], 99.95th=[ 486], 00:16:21.026 | 99.99th=[ 486] 00:16:21.026 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:21.026 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:21.026 lat (usec) : 250=18.57%, 500=77.49% 00:16:21.026 lat (msec) : 50=3.94% 00:16:21.026 cpu : usr=0.58%, sys=0.68%, ctx=533, majf=0, minf=2 00:16:21.026 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:21.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.026 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.026 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:21.026 00:16:21.026 Run status group 0 (all jobs): 00:16:21.026 READ: bw=81.3KiB/s (83.3kB/s), 81.3KiB/s-81.3KiB/s (83.3kB/s-83.3kB/s), io=84.0KiB (86.0kB), run=1033-1033msec 00:16:21.026 WRITE: bw=1983KiB/s (2030kB/s), 1983KiB/s-1983KiB/s (2030kB/s-2030kB/s), io=2048KiB (2097kB), run=1033-1033msec 00:16:21.026 00:16:21.026 Disk stats (read/write): 00:16:21.026 nvme0n1: ios=67/512, merge=0/0, ticks=744/151, in_queue=895, util=92.48% 00:16:21.026 11:31:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:21.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:21.026 11:31:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:21.026 11:31:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:16:21.026 11:31:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:21.026 11:31:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:21.026 11:31:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:21.026 11:31:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:21.026 11:31:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:16:21.026 11:31:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:21.026 11:31:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:21.026 11:31:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:16:21.026 11:31:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:21.026 11:31:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:21.026 11:31:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:21.026 11:31:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:21.026 11:31:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:21.026 rmmod nvme_tcp 00:16:21.026 rmmod nvme_fabrics 00:16:21.026 rmmod nvme_keyring 00:16:21.026 11:31:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:21.026 11:31:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:21.026 11:31:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:21.026 11:31:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2766378 ']' 00:16:21.026 11:31:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2766378 00:16:21.026 11:31:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 2766378 ']' 00:16:21.026 11:31:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 2766378 00:16:21.026 11:31:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:16:21.026 11:31:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:21.026 11:31:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2766378 00:16:21.284 11:31:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:21.284 11:31:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:21.284 11:31:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2766378' 00:16:21.284 killing process with pid 2766378 00:16:21.284 11:31:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 2766378 00:16:21.284 11:31:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 2766378 00:16:21.543 11:31:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:21.543 11:31:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:21.543 11:31:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:21.543 11:31:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:21.543 11:31:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:21.543 11:31:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.543 11:31:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.543 11:31:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.446 11:31:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:23.446 00:16:23.446 real 0m16.024s 00:16:23.446 user 0m42.698s 00:16:23.446 sys 0m5.348s 00:16:23.446 11:31:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:23.446 11:31:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:23.446 ************************************ 00:16:23.446 END TEST nvmf_nmic 00:16:23.446 ************************************ 00:16:23.446 11:31:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:23.446 11:31:57 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:23.446 11:31:57 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:23.446 11:31:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:23.446 11:31:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:23.446 ************************************ 00:16:23.446 START TEST nvmf_fio_target 00:16:23.446 ************************************ 00:16:23.446 11:31:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:23.706 * Looking for test storage... 00:16:23.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:23.706 11:31:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.271 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:30.271 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:30.271 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:30.272 11:32:03 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:30.272 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:30.272 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.272 11:32:03 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:30.272 Found net devices under 0000:af:00.0: cvl_0_0 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:30.272 Found net devices under 0000:af:00.1: cvl_0_1 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:30.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:16:30.272 00:16:30.272 --- 10.0.0.2 ping statistics --- 00:16:30.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.272 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:30.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:30.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:16:30.272 00:16:30.272 --- 10.0.0.1 ping statistics --- 00:16:30.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.272 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2771393 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2771393 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 2771393 ']' 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:30.272 11:32:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.272 [2024-07-15 11:32:03.917876] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:16:30.272 [2024-07-15 11:32:03.917932] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.272 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.272 [2024-07-15 11:32:04.007100] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:30.272 [2024-07-15 11:32:04.094922] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.273 [2024-07-15 11:32:04.094968] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:30.273 [2024-07-15 11:32:04.094979] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.273 [2024-07-15 11:32:04.094987] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.273 [2024-07-15 11:32:04.094995] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:30.273 [2024-07-15 11:32:04.095106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.273 [2024-07-15 11:32:04.095231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:30.273 [2024-07-15 11:32:04.095374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:30.273 [2024-07-15 11:32:04.095375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.529 11:32:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:30.529 11:32:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:16:30.529 11:32:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:30.529 11:32:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:30.529 11:32:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.529 11:32:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.529 11:32:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:30.786 [2024-07-15 11:32:05.054004] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:30.786 11:32:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:31.042 11:32:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:31.042 11:32:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:31.042 11:32:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:31.042 11:32:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:31.299 11:32:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
00:16:31.299 11:32:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:31.556 11:32:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:31.556 11:32:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:31.813 11:32:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:32.071 11:32:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:32.071 11:32:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:32.328 11:32:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:32.328 11:32:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:32.586 11:32:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:32.586 11:32:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:32.844 11:32:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:33.102 11:32:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:33.102 11:32:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:33.102 11:32:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:33.102 11:32:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:33.359 11:32:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:33.616 [2024-07-15 11:32:08.009221] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.616 11:32:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:33.874 11:32:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:34.132 11:32:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:35.528 11:32:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:35.528 11:32:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:16:35.528 11:32:09 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:35.528 11:32:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:16:35.528 11:32:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:16:35.528 11:32:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:16:37.430 11:32:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:37.430 11:32:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:37.430 11:32:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:37.430 11:32:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:16:37.430 11:32:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:37.430 11:32:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:16:37.430 11:32:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:37.430 [global] 00:16:37.430 thread=1 00:16:37.430 invalidate=1 00:16:37.430 rw=write 00:16:37.430 time_based=1 00:16:37.430 runtime=1 00:16:37.430 ioengine=libaio 00:16:37.430 direct=1 00:16:37.430 bs=4096 00:16:37.430 iodepth=1 00:16:37.430 norandommap=0 00:16:37.430 numjobs=1 00:16:37.430 00:16:37.430 verify_dump=1 00:16:37.430 verify_backlog=512 00:16:37.430 verify_state_save=0 00:16:37.430 do_verify=1 00:16:37.430 verify=crc32c-intel 00:16:37.430 [job0] 00:16:37.430 filename=/dev/nvme0n1 00:16:37.430 [job1] 00:16:37.430 filename=/dev/nvme0n2 00:16:37.430 [job2] 00:16:37.430 filename=/dev/nvme0n3 00:16:37.430 [job3] 00:16:37.430 filename=/dev/nvme0n4 00:16:37.717 Could not set queue depth (nvme0n1) 00:16:37.717 Could not set queue depth (nvme0n2) 00:16:37.717 Could not set queue depth (nvme0n3) 00:16:37.717 Could not set queue depth (nvme0n4) 00:16:37.984 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:37.984 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:37.984 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:37.984 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:37.984 fio-3.35 00:16:37.984 Starting 4 threads 00:16:39.388 00:16:39.388 job0: (groupid=0, jobs=1): err= 0: pid=2773174: Mon Jul 15 11:32:13 2024 00:16:39.388 read: IOPS=20, BW=81.6KiB/s (83.5kB/s)(84.0KiB/1030msec) 00:16:39.388 slat (nsec): min=9433, max=23836, avg=20168.29, stdev=5424.29 00:16:39.388 clat (usec): min=40779, max=42057, avg=41121.45, stdev=372.31 00:16:39.388 lat (usec): min=40802, max=42066, avg=41141.62, stdev=371.06 00:16:39.388 clat percentiles (usec): 00:16:39.388 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:16:39.388 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:39.388 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:16:39.388 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:39.388 | 99.99th=[42206] 00:16:39.388 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:16:39.388 slat (nsec): min=9784, max=46866, avg=11563.92, stdev=2238.61 
00:16:39.388 clat (usec): min=177, max=589, avg=310.29, stdev=55.65 00:16:39.388 lat (usec): min=188, max=604, avg=321.85, stdev=55.87 00:16:39.388 clat percentiles (usec): 00:16:39.388 | 1.00th=[ 190], 5.00th=[ 223], 10.00th=[ 265], 20.00th=[ 277], 00:16:39.388 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 310], 00:16:39.388 | 70.00th=[ 326], 80.00th=[ 351], 90.00th=[ 371], 95.00th=[ 388], 00:16:39.388 | 99.00th=[ 529], 99.50th=[ 537], 99.90th=[ 586], 99.95th=[ 586], 00:16:39.388 | 99.99th=[ 586] 00:16:39.388 bw ( KiB/s): min= 4096, max= 4096, per=41.20%, avg=4096.00, stdev= 0.00, samples=1 00:16:39.388 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:39.388 lat (usec) : 250=6.75%, 500=87.24%, 750=2.06% 00:16:39.388 lat (msec) : 50=3.94% 00:16:39.388 cpu : usr=0.19%, sys=0.58%, ctx=534, majf=0, minf=1 00:16:39.388 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:39.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.388 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:39.388 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:39.388 job1: (groupid=0, jobs=1): err= 0: pid=2773175: Mon Jul 15 11:32:13 2024 00:16:39.388 read: IOPS=522, BW=2089KiB/s (2140kB/s)(2104KiB/1007msec) 00:16:39.388 slat (nsec): min=6180, max=29522, avg=8092.54, stdev=2603.59 00:16:39.388 clat (usec): min=253, max=41974, avg=1382.66, stdev=6370.93 00:16:39.388 lat (usec): min=260, max=41996, avg=1390.75, stdev=6372.96 00:16:39.388 clat percentiles (usec): 00:16:39.388 | 1.00th=[ 269], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 302], 00:16:39.388 | 30.00th=[ 314], 40.00th=[ 318], 50.00th=[ 330], 60.00th=[ 338], 00:16:39.388 | 70.00th=[ 351], 80.00th=[ 367], 90.00th=[ 400], 95.00th=[ 490], 00:16:39.388 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:16:39.388 | 99.99th=[42206] 00:16:39.388 write: IOPS=1016, BW=4068KiB/s (4165kB/s)(4096KiB/1007msec); 0 zone resets 00:16:39.388 slat (nsec): min=9015, max=38469, avg=11199.55, stdev=2150.61 00:16:39.388 clat (usec): min=171, max=559, avg=253.49, stdev=61.19 00:16:39.388 lat (usec): min=181, max=573, avg=264.69, stdev=62.11 00:16:39.388 clat percentiles (usec): 00:16:39.388 | 1.00th=[ 176], 5.00th=[ 186], 10.00th=[ 200], 20.00th=[ 210], 00:16:39.388 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 237], 60.00th=[ 249], 00:16:39.388 | 70.00th=[ 265], 80.00th=[ 285], 90.00th=[ 351], 95.00th=[ 379], 00:16:39.388 | 99.00th=[ 486], 99.50th=[ 506], 99.90th=[ 553], 99.95th=[ 562], 00:16:39.388 | 99.99th=[ 562] 00:16:39.388 bw ( KiB/s): min= 8192, max= 8192, per=82.40%, avg=8192.00, stdev= 0.00, samples=1 00:16:39.388 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:39.388 lat (usec) : 250=40.19%, 500=57.81%, 750=0.90% 00:16:39.388 lat (msec) : 2=0.19%, 20=0.06%, 50=0.84% 00:16:39.388 cpu : usr=0.70%, sys=1.79%, ctx=1550, majf=0, minf=1 00:16:39.388 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:39.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.388 issued rwts: total=526,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:39.388 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:39.388 job2: (groupid=0, jobs=1): err= 0: pid=2773176: Mon Jul 15 11:32:13 2024 00:16:39.388 
read: IOPS=317, BW=1271KiB/s (1301kB/s)(1272KiB/1001msec) 00:16:39.388 slat (nsec): min=6852, max=35238, avg=8933.28, stdev=4606.06 00:16:39.388 clat (usec): min=297, max=41869, avg=2671.88, stdev=9411.93 00:16:39.388 lat (usec): min=305, max=41877, avg=2680.81, stdev=9414.81 00:16:39.388 clat percentiles (usec): 00:16:39.388 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 326], 20.00th=[ 334], 00:16:39.388 | 30.00th=[ 343], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 363], 00:16:39.388 | 70.00th=[ 367], 80.00th=[ 392], 90.00th=[ 529], 95.00th=[41157], 00:16:39.388 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:16:39.388 | 99.99th=[41681] 00:16:39.388 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:16:39.388 slat (usec): min=9, max=2239, avg=15.44, stdev=98.50 00:16:39.388 clat (usec): min=191, max=786, avg=270.42, stdev=41.96 00:16:39.388 lat (usec): min=203, max=2605, avg=285.86, stdev=111.18 00:16:39.388 clat percentiles (usec): 00:16:39.388 | 1.00th=[ 204], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 231], 00:16:39.388 | 30.00th=[ 245], 40.00th=[ 260], 50.00th=[ 273], 60.00th=[ 285], 00:16:39.388 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 322], 00:16:39.388 | 99.00th=[ 338], 99.50th=[ 367], 99.90th=[ 791], 99.95th=[ 791], 00:16:39.388 | 99.99th=[ 791] 00:16:39.388 bw ( KiB/s): min= 4096, max= 4096, per=41.20%, avg=4096.00, stdev= 0.00, samples=1 00:16:39.388 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:39.388 lat (usec) : 250=21.57%, 500=73.73%, 750=2.41%, 1000=0.12% 00:16:39.388 lat (msec) : 50=2.17% 00:16:39.388 cpu : usr=0.50%, sys=0.80%, ctx=832, majf=0, minf=2 00:16:39.388 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:39.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.388 issued rwts: total=318,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:39.388 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:39.388 job3: (groupid=0, jobs=1): err= 0: pid=2773177: Mon Jul 15 11:32:13 2024 00:16:39.388 read: IOPS=69, BW=279KiB/s (286kB/s)(280KiB/1002msec) 00:16:39.388 slat (nsec): min=8130, max=25576, avg=12741.37, stdev=5665.41 00:16:39.388 clat (usec): min=390, max=41585, avg=11486.68, stdev=18188.27 00:16:39.388 lat (usec): min=398, max=41608, avg=11499.42, stdev=18191.63 00:16:39.388 clat percentiles (usec): 00:16:39.388 | 1.00th=[ 392], 5.00th=[ 408], 10.00th=[ 412], 20.00th=[ 420], 00:16:39.388 | 30.00th=[ 429], 40.00th=[ 437], 50.00th=[ 449], 60.00th=[ 465], 00:16:39.388 | 70.00th=[ 494], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:39.388 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:16:39.388 | 99.99th=[41681] 00:16:39.388 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:16:39.388 slat (nsec): min=10320, max=39753, avg=13104.61, stdev=2026.65 00:16:39.388 clat (usec): min=269, max=2554, avg=368.24, stdev=149.84 00:16:39.388 lat (usec): min=280, max=2566, avg=381.35, stdev=150.07 00:16:39.388 clat percentiles (usec): 00:16:39.388 | 1.00th=[ 289], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 322], 00:16:39.388 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 347], 00:16:39.388 | 70.00th=[ 355], 80.00th=[ 371], 90.00th=[ 420], 95.00th=[ 553], 00:16:39.388 | 99.00th=[ 627], 99.50th=[ 1663], 99.90th=[ 2540], 99.95th=[ 2540], 00:16:39.388 | 99.99th=[ 2540] 00:16:39.388 
bw ( KiB/s): min= 4096, max= 4096, per=41.20%, avg=4096.00, stdev= 0.00, samples=1 00:16:39.388 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:39.388 lat (usec) : 500=90.21%, 750=5.67% 00:16:39.388 lat (msec) : 2=0.69%, 4=0.17%, 50=3.26% 00:16:39.388 cpu : usr=0.40%, sys=0.70%, ctx=583, majf=0, minf=1 00:16:39.388 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:39.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.388 issued rwts: total=70,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:39.389 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:39.389 00:16:39.389 Run status group 0 (all jobs): 00:16:39.389 READ: bw=3631KiB/s (3718kB/s), 81.6KiB/s-2089KiB/s (83.5kB/s-2140kB/s), io=3740KiB (3830kB), run=1001-1030msec 00:16:39.389 WRITE: bw=9942KiB/s (10.2MB/s), 1988KiB/s-4068KiB/s (2036kB/s-4165kB/s), io=10.0MiB (10.5MB), run=1001-1030msec 00:16:39.389 00:16:39.389 Disk stats (read/write): 00:16:39.389 nvme0n1: ios=68/512, merge=0/0, ticks=1284/155, in_queue=1439, util=95.79% 00:16:39.389 nvme0n2: ios=536/1024, merge=0/0, ticks=562/249, in_queue=811, util=85.26% 00:16:39.389 nvme0n3: ios=370/512, merge=0/0, ticks=862/137, in_queue=999, util=96.59% 00:16:39.389 nvme0n4: ios=122/512, merge=0/0, ticks=1109/182, in_queue=1291, util=96.12% 00:16:39.389 11:32:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:39.389 [global] 00:16:39.389 thread=1 00:16:39.389 invalidate=1 00:16:39.389 rw=randwrite 00:16:39.389 time_based=1 00:16:39.389 runtime=1 00:16:39.389 ioengine=libaio 00:16:39.389 direct=1 00:16:39.389 bs=4096 00:16:39.389 iodepth=1 00:16:39.389 norandommap=0 00:16:39.389 numjobs=1 00:16:39.389 00:16:39.389 verify_dump=1 00:16:39.389 verify_backlog=512 00:16:39.389 verify_state_save=0 00:16:39.389 do_verify=1 00:16:39.389 verify=crc32c-intel 00:16:39.389 [job0] 00:16:39.389 filename=/dev/nvme0n1 00:16:39.389 [job1] 00:16:39.389 filename=/dev/nvme0n2 00:16:39.389 [job2] 00:16:39.389 filename=/dev/nvme0n3 00:16:39.389 [job3] 00:16:39.389 filename=/dev/nvme0n4 00:16:39.389 Could not set queue depth (nvme0n1) 00:16:39.389 Could not set queue depth (nvme0n2) 00:16:39.389 Could not set queue depth (nvme0n3) 00:16:39.389 Could not set queue depth (nvme0n4) 00:16:39.651 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:39.651 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:39.651 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:39.651 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:39.651 fio-3.35 00:16:39.651 Starting 4 threads 00:16:41.054 00:16:41.054 job0: (groupid=0, jobs=1): err= 0: pid=2773599: Mon Jul 15 11:32:15 2024 00:16:41.054 read: IOPS=20, BW=82.0KiB/s (83.9kB/s)(84.0KiB/1025msec) 00:16:41.054 slat (nsec): min=8785, max=23173, avg=15317.24, stdev=5742.79 00:16:41.054 clat (usec): min=40805, max=41999, avg=41071.05, stdev=306.90 00:16:41.054 lat (usec): min=40827, max=42009, avg=41086.36, stdev=306.67 00:16:41.054 clat percentiles (usec): 00:16:41.054 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:16:41.054 
| 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:41.054 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:16:41.054 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:41.054 | 99.99th=[42206] 00:16:41.054 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:16:41.054 slat (nsec): min=5816, max=56080, avg=9767.52, stdev=4216.85 00:16:41.054 clat (usec): min=235, max=691, avg=304.26, stdev=38.02 00:16:41.054 lat (usec): min=248, max=697, avg=314.02, stdev=38.35 00:16:41.054 clat percentiles (usec): 00:16:41.054 | 1.00th=[ 239], 5.00th=[ 245], 10.00th=[ 269], 20.00th=[ 281], 00:16:41.054 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 310], 00:16:41.054 | 70.00th=[ 318], 80.00th=[ 322], 90.00th=[ 338], 95.00th=[ 351], 00:16:41.054 | 99.00th=[ 441], 99.50th=[ 537], 99.90th=[ 693], 99.95th=[ 693], 00:16:41.054 | 99.99th=[ 693] 00:16:41.054 bw ( KiB/s): min= 4096, max= 4096, per=28.61%, avg=4096.00, stdev= 0.00, samples=1 00:16:41.054 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:41.054 lat (usec) : 250=6.75%, 500=88.74%, 750=0.56% 00:16:41.054 lat (msec) : 50=3.94% 00:16:41.054 cpu : usr=0.29%, sys=0.29%, ctx=534, majf=0, minf=1 00:16:41.054 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:41.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.054 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.054 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:41.054 job1: (groupid=0, jobs=1): err= 0: pid=2773600: Mon Jul 15 11:32:15 2024 00:16:41.054 read: IOPS=20, BW=80.7KiB/s (82.6kB/s)(84.0KiB/1041msec) 00:16:41.055 slat (nsec): min=9355, max=24266, avg=20260.86, stdev=4884.58 00:16:41.055 clat (usec): min=40861, max=41516, avg=41003.11, stdev=130.76 00:16:41.055 lat (usec): min=40886, max=41526, avg=41023.37, stdev=128.69 00:16:41.055 clat percentiles (usec): 00:16:41.055 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:41.055 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:41.055 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:41.055 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:16:41.055 | 99.99th=[41681] 00:16:41.055 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:16:41.055 slat (nsec): min=10265, max=38493, avg=11751.36, stdev=1919.01 00:16:41.055 clat (usec): min=283, max=436, avg=335.25, stdev=23.42 00:16:41.055 lat (usec): min=294, max=475, avg=347.01, stdev=23.84 00:16:41.055 clat percentiles (usec): 00:16:41.055 | 1.00th=[ 289], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 318], 00:16:41.055 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 338], 00:16:41.055 | 70.00th=[ 347], 80.00th=[ 355], 90.00th=[ 367], 95.00th=[ 379], 00:16:41.055 | 99.00th=[ 396], 99.50th=[ 408], 99.90th=[ 437], 99.95th=[ 437], 00:16:41.055 | 99.99th=[ 437] 00:16:41.055 bw ( KiB/s): min= 4096, max= 4096, per=28.61%, avg=4096.00, stdev= 0.00, samples=1 00:16:41.055 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:41.055 lat (usec) : 500=96.06% 00:16:41.055 lat (msec) : 50=3.94% 00:16:41.055 cpu : usr=0.19%, sys=1.15%, ctx=534, majf=0, minf=1 00:16:41.055 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:41.055 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.055 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.055 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:41.055 job2: (groupid=0, jobs=1): err= 0: pid=2773603: Mon Jul 15 11:32:15 2024 00:16:41.055 read: IOPS=894, BW=3580KiB/s (3665kB/s)(3712KiB/1037msec) 00:16:41.055 slat (nsec): min=7367, max=25726, avg=8412.01, stdev=1945.60 00:16:41.055 clat (usec): min=260, max=41232, avg=816.07, stdev=4403.47 00:16:41.055 lat (usec): min=268, max=41244, avg=824.49, stdev=4404.83 00:16:41.055 clat percentiles (usec): 00:16:41.055 | 1.00th=[ 273], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 306], 00:16:41.055 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 326], 00:16:41.055 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 363], 95.00th=[ 502], 00:16:41.055 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:41.055 | 99.99th=[41157] 00:16:41.055 write: IOPS=987, BW=3950KiB/s (4045kB/s)(4096KiB/1037msec); 0 zone resets 00:16:41.055 slat (nsec): min=10612, max=41337, avg=12318.95, stdev=2267.01 00:16:41.055 clat (usec): min=173, max=608, avg=246.67, stdev=39.97 00:16:41.055 lat (usec): min=185, max=621, avg=258.99, stdev=40.20 00:16:41.055 clat percentiles (usec): 00:16:41.055 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 215], 00:16:41.055 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 237], 60.00th=[ 247], 00:16:41.055 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 314], 00:16:41.055 | 99.00th=[ 330], 99.50th=[ 351], 99.90th=[ 553], 99.95th=[ 611], 00:16:41.055 | 99.99th=[ 611] 00:16:41.055 bw ( KiB/s): min= 8192, max= 8192, per=57.22%, avg=8192.00, stdev= 0.00, samples=1 00:16:41.055 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:41.055 lat (usec) : 250=32.84%, 500=64.45%, 750=2.10% 00:16:41.055 lat (msec) : 4=0.05%, 50=0.56% 00:16:41.055 cpu : usr=1.45%, sys=3.28%, ctx=1954, majf=0, minf=1 00:16:41.055 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:41.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.055 issued rwts: total=928,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.055 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:41.055 job3: (groupid=0, jobs=1): err= 0: pid=2773604: Mon Jul 15 11:32:15 2024 00:16:41.055 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:41.055 slat (nsec): min=7338, max=57610, avg=8490.32, stdev=2175.90 00:16:41.055 clat (usec): min=256, max=1151, avg=354.26, stdev=56.52 00:16:41.055 lat (usec): min=264, max=1162, avg=362.75, stdev=56.83 00:16:41.055 clat percentiles (usec): 00:16:41.055 | 1.00th=[ 277], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 310], 00:16:41.055 | 30.00th=[ 330], 40.00th=[ 343], 50.00th=[ 355], 60.00th=[ 367], 00:16:41.055 | 70.00th=[ 375], 80.00th=[ 388], 90.00th=[ 404], 95.00th=[ 420], 00:16:41.055 | 99.00th=[ 465], 99.50th=[ 515], 99.90th=[ 1106], 99.95th=[ 1156], 00:16:41.055 | 99.99th=[ 1156] 00:16:41.055 write: IOPS=1676, BW=6705KiB/s (6866kB/s)(6712KiB/1001msec); 0 zone resets 00:16:41.055 slat (nsec): min=10526, max=70387, avg=12645.29, stdev=4301.79 00:16:41.055 clat (usec): min=175, max=444, avg=245.20, stdev=31.82 00:16:41.055 lat (usec): min=187, max=458, avg=257.84, stdev=33.25 
00:16:41.055 clat percentiles (usec): 00:16:41.055 | 1.00th=[ 186], 5.00th=[ 198], 10.00th=[ 212], 20.00th=[ 223], 00:16:41.055 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 247], 00:16:41.055 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 302], 00:16:41.055 | 99.00th=[ 338], 99.50th=[ 367], 99.90th=[ 433], 99.95th=[ 445], 00:16:41.055 | 99.99th=[ 445] 00:16:41.055 bw ( KiB/s): min= 8192, max= 8192, per=57.22%, avg=8192.00, stdev= 0.00, samples=1 00:16:41.055 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:41.055 lat (usec) : 250=32.39%, 500=67.30%, 750=0.19% 00:16:41.055 lat (msec) : 2=0.12% 00:16:41.055 cpu : usr=2.90%, sys=5.10%, ctx=3216, majf=0, minf=2 00:16:41.055 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:41.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.055 issued rwts: total=1536,1678,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.055 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:41.055 00:16:41.055 Run status group 0 (all jobs): 00:16:41.055 READ: bw=9629KiB/s (9860kB/s), 80.7KiB/s-6138KiB/s (82.6kB/s-6285kB/s), io=9.79MiB (10.3MB), run=1001-1041msec 00:16:41.055 WRITE: bw=14.0MiB/s (14.7MB/s), 1967KiB/s-6705KiB/s (2015kB/s-6866kB/s), io=14.6MiB (15.3MB), run=1001-1041msec 00:16:41.055 00:16:41.055 Disk stats (read/write): 00:16:41.055 nvme0n1: ios=46/512, merge=0/0, ticks=1647/151, in_queue=1798, util=96.09% 00:16:41.055 nvme0n2: ios=46/512, merge=0/0, ticks=790/168, in_queue=958, util=97.45% 00:16:41.055 nvme0n3: ios=947/1024, merge=0/0, ticks=1482/244, in_queue=1726, util=96.08% 00:16:41.055 nvme0n4: ios=1144/1536, merge=0/0, ticks=710/361, in_queue=1071, util=97.64% 00:16:41.055 11:32:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:41.055 [global] 00:16:41.055 thread=1 00:16:41.055 invalidate=1 00:16:41.055 rw=write 00:16:41.055 time_based=1 00:16:41.055 runtime=1 00:16:41.055 ioengine=libaio 00:16:41.055 direct=1 00:16:41.055 bs=4096 00:16:41.055 iodepth=128 00:16:41.055 norandommap=0 00:16:41.055 numjobs=1 00:16:41.055 00:16:41.055 verify_dump=1 00:16:41.055 verify_backlog=512 00:16:41.055 verify_state_save=0 00:16:41.055 do_verify=1 00:16:41.055 verify=crc32c-intel 00:16:41.055 [job0] 00:16:41.055 filename=/dev/nvme0n1 00:16:41.055 [job1] 00:16:41.055 filename=/dev/nvme0n2 00:16:41.055 [job2] 00:16:41.055 filename=/dev/nvme0n3 00:16:41.055 [job3] 00:16:41.055 filename=/dev/nvme0n4 00:16:41.055 Could not set queue depth (nvme0n1) 00:16:41.055 Could not set queue depth (nvme0n2) 00:16:41.055 Could not set queue depth (nvme0n3) 00:16:41.055 Could not set queue depth (nvme0n4) 00:16:41.321 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:41.321 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:41.321 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:41.321 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:41.321 fio-3.35 00:16:41.321 Starting 4 threads 00:16:42.725 00:16:42.725 job0: (groupid=0, jobs=1): err= 0: pid=2774018: Mon Jul 15 11:32:16 2024 00:16:42.725 read: IOPS=1381, BW=5527KiB/s 
(5660kB/s)(5776KiB/1045msec) 00:16:42.725 slat (usec): min=2, max=43665, avg=417.15, stdev=3021.61 00:16:42.725 clat (msec): min=16, max=134, avg=51.54, stdev=31.48 00:16:42.725 lat (msec): min=16, max=154, avg=51.95, stdev=31.80 00:16:42.725 clat percentiles (msec): 00:16:42.725 | 1.00th=[ 18], 5.00th=[ 19], 10.00th=[ 21], 20.00th=[ 23], 00:16:42.725 | 30.00th=[ 26], 40.00th=[ 34], 50.00th=[ 41], 60.00th=[ 45], 00:16:42.725 | 70.00th=[ 70], 80.00th=[ 88], 90.00th=[ 99], 95.00th=[ 102], 00:16:42.725 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 136], 99.95th=[ 136], 00:16:42.725 | 99.99th=[ 136] 00:16:42.725 write: IOPS=1469, BW=5879KiB/s (6021kB/s)(6144KiB/1045msec); 0 zone resets 00:16:42.725 slat (usec): min=2, max=28147, avg=253.12, stdev=1524.12 00:16:42.725 clat (msec): min=15, max=137, avg=35.55, stdev=27.20 00:16:42.725 lat (msec): min=15, max=137, avg=35.80, stdev=27.34 00:16:42.725 clat percentiles (msec): 00:16:42.725 | 1.00th=[ 16], 5.00th=[ 17], 10.00th=[ 17], 20.00th=[ 18], 00:16:42.725 | 30.00th=[ 19], 40.00th=[ 28], 50.00th=[ 29], 60.00th=[ 33], 00:16:42.725 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 60], 95.00th=[ 121], 00:16:42.725 | 99.00th=[ 133], 99.50th=[ 133], 99.90th=[ 138], 99.95th=[ 138], 00:16:42.725 | 99.99th=[ 138] 00:16:42.725 bw ( KiB/s): min= 4096, max= 8175, per=15.24%, avg=6135.50, stdev=2884.29, samples=2 00:16:42.725 iops : min= 1024, max= 2043, avg=1533.50, stdev=720.54, samples=2 00:16:42.725 lat (msec) : 20=21.54%, 50=54.06%, 100=16.88%, 250=7.52% 00:16:42.725 cpu : usr=1.44%, sys=2.11%, ctx=133, majf=0, minf=1 00:16:42.725 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:16:42.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:42.725 issued rwts: total=1444,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:42.725 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:42.725 job1: (groupid=0, jobs=1): err= 0: pid=2774019: Mon Jul 15 11:32:16 2024 00:16:42.725 read: IOPS=3296, BW=12.9MiB/s (13.5MB/s)(13.5MiB/1045msec) 00:16:42.725 slat (usec): min=2, max=11888, avg=137.92, stdev=920.85 00:16:42.725 clat (usec): min=8907, max=67233, avg=19437.58, stdev=8063.51 00:16:42.725 lat (usec): min=8914, max=67237, avg=19575.50, stdev=8112.73 00:16:42.725 clat percentiles (usec): 00:16:42.725 | 1.00th=[ 8979], 5.00th=[13435], 10.00th=[13698], 20.00th=[14746], 00:16:42.725 | 30.00th=[15795], 40.00th=[16909], 50.00th=[17957], 60.00th=[18744], 00:16:42.725 | 70.00th=[20317], 80.00th=[21627], 90.00th=[24249], 95.00th=[27919], 00:16:42.725 | 99.00th=[60031], 99.50th=[60031], 99.90th=[67634], 99.95th=[67634], 00:16:42.725 | 99.99th=[67634] 00:16:42.725 write: IOPS=3429, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1045msec); 0 zone resets 00:16:42.725 slat (usec): min=3, max=14201, avg=136.45, stdev=957.58 00:16:42.725 clat (usec): min=5650, max=54012, avg=18191.30, stdev=7099.50 00:16:42.725 lat (usec): min=5659, max=54018, avg=18327.75, stdev=7178.64 00:16:42.725 clat percentiles (usec): 00:16:42.725 | 1.00th=[ 7308], 5.00th=[ 8979], 10.00th=[12387], 20.00th=[13304], 00:16:42.725 | 30.00th=[13829], 40.00th=[14222], 50.00th=[15008], 60.00th=[16909], 00:16:42.725 | 70.00th=[21627], 80.00th=[22676], 90.00th=[28443], 95.00th=[33424], 00:16:42.725 | 99.00th=[39060], 99.50th=[41157], 99.90th=[44827], 99.95th=[44827], 00:16:42.725 | 99.99th=[54264] 00:16:42.725 bw ( KiB/s): min=13277, max=15368, per=35.57%, avg=14322.50, stdev=1478.56, 
samples=2 00:16:42.725 iops : min= 3319, max= 3842, avg=3580.50, stdev=369.82, samples=2 00:16:42.725 lat (msec) : 10=4.52%, 20=61.52%, 50=32.15%, 100=1.81% 00:16:42.725 cpu : usr=2.78%, sys=4.60%, ctx=220, majf=0, minf=1 00:16:42.725 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:42.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:42.725 issued rwts: total=3445,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:42.725 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:42.725 job2: (groupid=0, jobs=1): err= 0: pid=2774020: Mon Jul 15 11:32:16 2024 00:16:42.725 read: IOPS=2778, BW=10.9MiB/s (11.4MB/s)(11.1MiB/1027msec) 00:16:42.725 slat (nsec): min=1605, max=22425k, avg=165489.58, stdev=983705.85 00:16:42.725 clat (usec): min=6927, max=29953, avg=19982.44, stdev=3252.07 00:16:42.725 lat (usec): min=6992, max=36328, avg=20147.93, stdev=3319.17 00:16:42.725 clat percentiles (usec): 00:16:42.725 | 1.00th=[ 7111], 5.00th=[15401], 10.00th=[16909], 20.00th=[18482], 00:16:42.725 | 30.00th=[19530], 40.00th=[19530], 50.00th=[19792], 60.00th=[20055], 00:16:42.725 | 70.00th=[20317], 80.00th=[20841], 90.00th=[22938], 95.00th=[26084], 00:16:42.725 | 99.00th=[29754], 99.50th=[29754], 99.90th=[30016], 99.95th=[30016], 00:16:42.725 | 99.99th=[30016] 00:16:42.725 write: IOPS=2991, BW=11.7MiB/s (12.3MB/s)(12.0MiB/1027msec); 0 zone resets 00:16:42.725 slat (usec): min=2, max=18149, avg=173.02, stdev=778.87 00:16:42.725 clat (msec): min=13, max=110, avg=23.64, stdev=16.73 00:16:42.725 lat (msec): min=13, max=110, avg=23.81, stdev=16.80 00:16:42.725 clat percentiles (msec): 00:16:42.725 | 1.00th=[ 16], 5.00th=[ 19], 10.00th=[ 19], 20.00th=[ 20], 00:16:42.725 | 30.00th=[ 20], 40.00th=[ 20], 50.00th=[ 20], 60.00th=[ 20], 00:16:42.725 | 70.00th=[ 21], 80.00th=[ 21], 90.00th=[ 24], 95.00th=[ 39], 00:16:42.725 | 99.00th=[ 111], 99.50th=[ 111], 99.90th=[ 111], 99.95th=[ 111], 00:16:42.725 | 99.99th=[ 111] 00:16:42.725 bw ( KiB/s): min=11848, max=12704, per=30.49%, avg=12276.00, stdev=605.28, samples=2 00:16:42.726 iops : min= 2962, max= 3176, avg=3069.00, stdev=151.32, samples=2 00:16:42.726 lat (msec) : 10=0.71%, 20=59.97%, 50=36.92%, 100=1.06%, 250=1.33% 00:16:42.726 cpu : usr=2.24%, sys=2.63%, ctx=473, majf=0, minf=1 00:16:42.726 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:16:42.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.726 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:42.726 issued rwts: total=2854,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:42.726 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:42.726 job3: (groupid=0, jobs=1): err= 0: pid=2774021: Mon Jul 15 11:32:16 2024 00:16:42.726 read: IOPS=1971, BW=7885KiB/s (8074kB/s)(8192KiB/1039msec) 00:16:42.726 slat (usec): min=2, max=26298, avg=262.29, stdev=1900.46 00:16:42.726 clat (usec): min=10113, max=55616, avg=31478.66, stdev=7092.95 00:16:42.726 lat (usec): min=10119, max=55647, avg=31740.95, stdev=7222.01 00:16:42.726 clat percentiles (usec): 00:16:42.726 | 1.00th=[10683], 5.00th=[24511], 10.00th=[28443], 20.00th=[28967], 00:16:42.726 | 30.00th=[29230], 40.00th=[29230], 50.00th=[29492], 60.00th=[29492], 00:16:42.726 | 70.00th=[29754], 80.00th=[32900], 90.00th=[43779], 95.00th=[47449], 00:16:42.726 | 99.00th=[53216], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 
00:16:42.726 | 99.99th=[55837] 00:16:42.726 write: IOPS=2238, BW=8955KiB/s (9170kB/s)(9304KiB/1039msec); 0 zone resets 00:16:42.726 slat (usec): min=3, max=23845, avg=198.99, stdev=1151.61 00:16:42.726 clat (usec): min=1510, max=68410, avg=28939.49, stdev=8619.90 00:16:42.726 lat (usec): min=1522, max=68418, avg=29138.48, stdev=8709.00 00:16:42.726 clat percentiles (usec): 00:16:42.726 | 1.00th=[ 7767], 5.00th=[13435], 10.00th=[19268], 20.00th=[26346], 00:16:42.726 | 30.00th=[28181], 40.00th=[28443], 50.00th=[29754], 60.00th=[30016], 00:16:42.726 | 70.00th=[30802], 80.00th=[31065], 90.00th=[33162], 95.00th=[43779], 00:16:42.726 | 99.00th=[62653], 99.50th=[66323], 99.90th=[68682], 99.95th=[68682], 00:16:42.726 | 99.99th=[68682] 00:16:42.726 bw ( KiB/s): min= 8704, max= 8862, per=21.82%, avg=8783.00, stdev=111.72, samples=2 00:16:42.726 iops : min= 2176, max= 2215, avg=2195.50, stdev=27.58, samples=2 00:16:42.726 lat (msec) : 2=0.05%, 10=1.46%, 20=5.58%, 50=89.57%, 100=3.34% 00:16:42.726 cpu : usr=2.89%, sys=2.60%, ctx=270, majf=0, minf=1 00:16:42.726 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:16:42.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.726 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:42.726 issued rwts: total=2048,2326,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:42.726 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:42.726 00:16:42.726 Run status group 0 (all jobs): 00:16:42.726 READ: bw=36.6MiB/s (38.4MB/s), 5527KiB/s-12.9MiB/s (5660kB/s-13.5MB/s), io=38.2MiB (40.1MB), run=1027-1045msec 00:16:42.726 WRITE: bw=39.3MiB/s (41.2MB/s), 5879KiB/s-13.4MiB/s (6021kB/s-14.0MB/s), io=41.1MiB (43.1MB), run=1027-1045msec 00:16:42.726 00:16:42.726 Disk stats (read/write): 00:16:42.726 nvme0n1: ios=1201/1536, merge=0/0, ticks=18573/22213, in_queue=40786, util=96.19% 00:16:42.726 nvme0n2: ios=2591/3053, merge=0/0, ticks=30443/34823, in_queue=65266, util=97.75% 00:16:42.726 nvme0n3: ios=2180/2560, merge=0/0, ticks=15291/19785, in_queue=35076, util=96.07% 00:16:42.726 nvme0n4: ios=1593/1983, merge=0/0, ticks=48258/54373, in_queue=102631, util=95.91% 00:16:42.726 11:32:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:42.726 [global] 00:16:42.726 thread=1 00:16:42.726 invalidate=1 00:16:42.726 rw=randwrite 00:16:42.726 time_based=1 00:16:42.726 runtime=1 00:16:42.726 ioengine=libaio 00:16:42.726 direct=1 00:16:42.726 bs=4096 00:16:42.726 iodepth=128 00:16:42.726 norandommap=0 00:16:42.726 numjobs=1 00:16:42.726 00:16:42.726 verify_dump=1 00:16:42.726 verify_backlog=512 00:16:42.726 verify_state_save=0 00:16:42.726 do_verify=1 00:16:42.726 verify=crc32c-intel 00:16:42.726 [job0] 00:16:42.726 filename=/dev/nvme0n1 00:16:42.726 [job1] 00:16:42.726 filename=/dev/nvme0n2 00:16:42.726 [job2] 00:16:42.726 filename=/dev/nvme0n3 00:16:42.726 [job3] 00:16:42.726 filename=/dev/nvme0n4 00:16:42.726 Could not set queue depth (nvme0n1) 00:16:42.726 Could not set queue depth (nvme0n2) 00:16:42.726 Could not set queue depth (nvme0n3) 00:16:42.726 Could not set queue depth (nvme0n4) 00:16:42.988 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:42.989 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:42.989 job2: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:42.989 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:42.989 fio-3.35 00:16:42.989 Starting 4 threads 00:16:44.364 00:16:44.364 job0: (groupid=0, jobs=1): err= 0: pid=2774463: Mon Jul 15 11:32:18 2024 00:16:44.364 read: IOPS=3317, BW=13.0MiB/s (13.6MB/s)(13.1MiB/1010msec) 00:16:44.364 slat (usec): min=2, max=23797, avg=153.47, stdev=1063.60 00:16:44.364 clat (usec): min=1095, max=66004, avg=18742.05, stdev=6917.25 00:16:44.364 lat (usec): min=10669, max=67092, avg=18895.51, stdev=6983.63 00:16:44.364 clat percentiles (usec): 00:16:44.364 | 1.00th=[11731], 5.00th=[12780], 10.00th=[14091], 20.00th=[16450], 00:16:44.364 | 30.00th=[16581], 40.00th=[16909], 50.00th=[16909], 60.00th=[17171], 00:16:44.364 | 70.00th=[17433], 80.00th=[18482], 90.00th=[22152], 95.00th=[41681], 00:16:44.364 | 99.00th=[46400], 99.50th=[46400], 99.90th=[46924], 99.95th=[64750], 00:16:44.364 | 99.99th=[65799] 00:16:44.364 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:16:44.364 slat (usec): min=3, max=19379, avg=122.99, stdev=589.04 00:16:44.364 clat (usec): min=833, max=59772, avg=18170.15, stdev=4758.56 00:16:44.364 lat (usec): min=1658, max=59781, avg=18293.14, stdev=4785.50 00:16:44.364 clat percentiles (usec): 00:16:44.364 | 1.00th=[ 8848], 5.00th=[11207], 10.00th=[14484], 20.00th=[16450], 00:16:44.364 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17433], 60.00th=[17695], 00:16:44.364 | 70.00th=[17695], 80.00th=[20055], 90.00th=[22414], 95.00th=[24511], 00:16:44.364 | 99.00th=[39584], 99.50th=[39584], 99.90th=[42206], 99.95th=[42206], 00:16:44.364 | 99.99th=[60031] 00:16:44.364 bw ( KiB/s): min=13000, max=15672, per=36.51%, avg=14336.00, stdev=1889.39, samples=2 00:16:44.364 iops : min= 3250, max= 3918, avg=3584.00, stdev=472.35, samples=2 00:16:44.364 lat (usec) : 1000=0.01% 00:16:44.364 lat (msec) : 2=0.01%, 10=1.69%, 20=79.60%, 50=18.64%, 100=0.04% 00:16:44.364 cpu : usr=3.47%, sys=4.66%, ctx=465, majf=0, minf=1 00:16:44.364 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:44.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:44.364 issued rwts: total=3351,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:44.364 job1: (groupid=0, jobs=1): err= 0: pid=2774470: Mon Jul 15 11:32:18 2024 00:16:44.364 read: IOPS=2450, BW=9802KiB/s (10.0MB/s)(9.98MiB/1043msec) 00:16:44.364 slat (nsec): min=1980, max=22301k, avg=204342.12, stdev=1537298.24 00:16:44.364 clat (usec): min=9147, max=65579, avg=28194.01, stdev=9205.83 00:16:44.364 lat (usec): min=9164, max=65584, avg=28398.36, stdev=9264.52 00:16:44.364 clat percentiles (usec): 00:16:44.364 | 1.00th=[12256], 5.00th=[18482], 10.00th=[21890], 20.00th=[23987], 00:16:44.364 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084], 00:16:44.364 | 70.00th=[26870], 80.00th=[32900], 90.00th=[41681], 95.00th=[46924], 00:16:44.364 | 99.00th=[65274], 99.50th=[65274], 99.90th=[65799], 99.95th=[65799], 00:16:44.364 | 99.99th=[65799] 00:16:44.364 write: IOPS=2454, BW=9818KiB/s (10.1MB/s)(10.0MiB/1043msec); 0 zone resets 00:16:44.364 slat (usec): min=3, max=23105, avg=164.91, stdev=1329.26 00:16:44.364 clat (usec): min=2977, max=47094, avg=23561.03, stdev=4747.65 00:16:44.364 lat (usec): 
min=2984, max=47121, avg=23725.94, stdev=4957.53 00:16:44.364 clat percentiles (usec): 00:16:44.364 | 1.00th=[ 3359], 5.00th=[14877], 10.00th=[18220], 20.00th=[21365], 00:16:44.364 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24773], 60.00th=[25035], 00:16:44.364 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26608], 95.00th=[27132], 00:16:44.364 | 99.00th=[36439], 99.50th=[43254], 99.90th=[46924], 99.95th=[46924], 00:16:44.364 | 99.99th=[46924] 00:16:44.364 bw ( KiB/s): min= 8200, max=12280, per=26.08%, avg=10240.00, stdev=2885.00, samples=2 00:16:44.364 iops : min= 2050, max= 3070, avg=2560.00, stdev=721.25, samples=2 00:16:44.364 lat (msec) : 4=0.55%, 10=0.86%, 20=11.86%, 50=84.89%, 100=1.84% 00:16:44.364 cpu : usr=2.02%, sys=3.17%, ctx=211, majf=0, minf=1 00:16:44.364 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:44.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:44.364 issued rwts: total=2556,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:44.364 job2: (groupid=0, jobs=1): err= 0: pid=2774494: Mon Jul 15 11:32:18 2024 00:16:44.364 read: IOPS=2527, BW=9.87MiB/s (10.4MB/s)(9.97MiB/1010msec) 00:16:44.364 slat (usec): min=2, max=19476, avg=192.31, stdev=1266.38 00:16:44.364 clat (usec): min=6062, max=66685, avg=22346.83, stdev=9367.28 00:16:44.364 lat (usec): min=8457, max=66694, avg=22539.14, stdev=9484.66 00:16:44.364 clat percentiles (usec): 00:16:44.364 | 1.00th=[10814], 5.00th=[16450], 10.00th=[16909], 20.00th=[17695], 00:16:44.364 | 30.00th=[18482], 40.00th=[18744], 50.00th=[19006], 60.00th=[19792], 00:16:44.364 | 70.00th=[20317], 80.00th=[23200], 90.00th=[34341], 95.00th=[47449], 00:16:44.364 | 99.00th=[60031], 99.50th=[64226], 99.90th=[66847], 99.95th=[66847], 00:16:44.364 | 99.99th=[66847] 00:16:44.364 write: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec); 0 zone resets 00:16:44.364 slat (usec): min=3, max=14312, avg=192.38, stdev=978.51 00:16:44.364 clat (usec): min=1561, max=66639, avg=27749.79, stdev=13094.00 00:16:44.364 lat (usec): min=1576, max=66646, avg=27942.17, stdev=13192.06 00:16:44.364 clat percentiles (usec): 00:16:44.364 | 1.00th=[ 6063], 5.00th=[13698], 10.00th=[14484], 20.00th=[16450], 00:16:44.364 | 30.00th=[16909], 40.00th=[18482], 50.00th=[26870], 60.00th=[32113], 00:16:44.364 | 70.00th=[32375], 80.00th=[38536], 90.00th=[46924], 95.00th=[54264], 00:16:44.364 | 99.00th=[60556], 99.50th=[60556], 99.90th=[61080], 99.95th=[61080], 00:16:44.364 | 99.99th=[66847] 00:16:44.364 bw ( KiB/s): min= 8240, max=12240, per=26.08%, avg=10240.00, stdev=2828.43, samples=2 00:16:44.364 iops : min= 2060, max= 3060, avg=2560.00, stdev=707.11, samples=2 00:16:44.364 lat (msec) : 2=0.18%, 4=0.16%, 10=1.43%, 20=50.38%, 50=42.54% 00:16:44.364 lat (msec) : 100=5.32% 00:16:44.364 cpu : usr=2.97%, sys=3.47%, ctx=270, majf=0, minf=1 00:16:44.364 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:44.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:44.364 issued rwts: total=2553,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:44.364 job3: (groupid=0, jobs=1): err= 0: pid=2774502: Mon Jul 15 11:32:18 2024 00:16:44.364 read: IOPS=1291, BW=5164KiB/s 
(5288kB/s)(5216KiB/1010msec) 00:16:44.364 slat (usec): min=2, max=31152, avg=402.81, stdev=2326.11 00:16:44.364 clat (usec): min=1645, max=152606, avg=55499.34, stdev=32732.96 00:16:44.364 lat (msec): min=10, max=152, avg=55.90, stdev=32.97 00:16:44.364 clat percentiles (msec): 00:16:44.364 | 1.00th=[ 11], 5.00th=[ 23], 10.00th=[ 24], 20.00th=[ 25], 00:16:44.364 | 30.00th=[ 26], 40.00th=[ 35], 50.00th=[ 44], 60.00th=[ 64], 00:16:44.364 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 107], 95.00th=[ 111], 00:16:44.364 | 99.00th=[ 138], 99.50th=[ 140], 99.90th=[ 153], 99.95th=[ 153], 00:16:44.364 | 99.99th=[ 153] 00:16:44.364 write: IOPS=1520, BW=6083KiB/s (6229kB/s)(6144KiB/1010msec); 0 zone resets 00:16:44.364 slat (usec): min=4, max=17072, avg=305.51, stdev=1390.36 00:16:44.364 clat (msec): min=18, max=115, avg=35.63, stdev=18.93 00:16:44.364 lat (msec): min=18, max=115, avg=35.93, stdev=19.08 00:16:44.364 clat percentiles (msec): 00:16:44.364 | 1.00th=[ 22], 5.00th=[ 22], 10.00th=[ 22], 20.00th=[ 23], 00:16:44.364 | 30.00th=[ 26], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 33], 00:16:44.364 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 63], 95.00th=[ 87], 00:16:44.364 | 99.00th=[ 112], 99.50th=[ 114], 99.90th=[ 116], 99.95th=[ 116], 00:16:44.364 | 99.99th=[ 116] 00:16:44.364 bw ( KiB/s): min= 4096, max= 8192, per=15.65%, avg=6144.00, stdev=2896.31, samples=2 00:16:44.364 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:16:44.364 lat (msec) : 2=0.04%, 20=1.48%, 50=70.53%, 100=20.39%, 250=7.57% 00:16:44.364 cpu : usr=1.58%, sys=2.57%, ctx=196, majf=0, minf=1 00:16:44.364 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:16:44.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:44.364 issued rwts: total=1304,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:44.364 00:16:44.364 Run status group 0 (all jobs): 00:16:44.364 READ: bw=36.6MiB/s (38.3MB/s), 5164KiB/s-13.0MiB/s (5288kB/s-13.6MB/s), io=38.1MiB (40.0MB), run=1010-1043msec 00:16:44.364 WRITE: bw=38.4MiB/s (40.2MB/s), 6083KiB/s-13.9MiB/s (6229kB/s-14.5MB/s), io=40.0MiB (41.9MB), run=1010-1043msec 00:16:44.364 00:16:44.364 Disk stats (read/write): 00:16:44.364 nvme0n1: ios=2953/3072, merge=0/0, ticks=26166/27848, in_queue=54014, util=91.98% 00:16:44.364 nvme0n2: ios=2098/2175, merge=0/0, ticks=52860/48949, in_queue=101809, util=89.99% 00:16:44.364 nvme0n3: ios=2092/2239, merge=0/0, ticks=42813/58763, in_queue=101576, util=94.05% 00:16:44.364 nvme0n4: ios=1044/1295, merge=0/0, ticks=17946/17106, in_queue=35052, util=96.34% 00:16:44.364 11:32:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:44.364 11:32:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2774707 00:16:44.364 11:32:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:44.364 11:32:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:16:44.364 [global] 00:16:44.364 thread=1 00:16:44.364 invalidate=1 00:16:44.364 rw=read 00:16:44.364 time_based=1 00:16:44.364 runtime=10 00:16:44.364 ioengine=libaio 00:16:44.364 direct=1 00:16:44.364 bs=4096 00:16:44.364 iodepth=1 00:16:44.364 norandommap=1 00:16:44.364 numjobs=1 00:16:44.364 00:16:44.364 [job0] 00:16:44.364 filename=/dev/nvme0n1 00:16:44.364 [job1] 00:16:44.364 
filename=/dev/nvme0n2 00:16:44.364 [job2] 00:16:44.364 filename=/dev/nvme0n3 00:16:44.364 [job3] 00:16:44.364 filename=/dev/nvme0n4 00:16:44.364 Could not set queue depth (nvme0n1) 00:16:44.364 Could not set queue depth (nvme0n2) 00:16:44.364 Could not set queue depth (nvme0n3) 00:16:44.364 Could not set queue depth (nvme0n4) 00:16:44.675 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:44.675 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:44.675 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:44.675 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:44.675 fio-3.35 00:16:44.675 Starting 4 threads 00:16:47.235 11:32:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:47.493 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=18890752, buflen=4096 00:16:47.493 fio: pid=2775003, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:47.493 11:32:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:47.752 11:32:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:47.752 11:32:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:47.752 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=18288640, buflen=4096 00:16:47.752 fio: pid=2774995, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:48.010 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=12349440, buflen=4096 00:16:48.010 fio: pid=2774957, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:48.010 11:32:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:48.010 11:32:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:48.269 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=18448384, buflen=4096 00:16:48.269 fio: pid=2774971, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:48.269 11:32:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:48.269 11:32:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:48.269 00:16:48.269 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2774957: Mon Jul 15 11:32:22 2024 00:16:48.269 read: IOPS=945, BW=3779KiB/s (3870kB/s)(11.8MiB/3191msec) 00:16:48.269 slat (usec): min=6, max=17542, avg=19.05, stdev=424.12 00:16:48.269 clat (usec): min=296, max=42046, avg=1028.72, stdev=4930.90 00:16:48.269 lat (usec): min=303, max=42058, avg=1047.77, stdev=4948.65 00:16:48.269 clat percentiles (usec): 00:16:48.269 | 1.00th=[ 326], 5.00th=[ 367], 10.00th=[ 375], 20.00th=[ 388], 00:16:48.269 | 30.00th=[ 392], 40.00th=[ 400], 50.00th=[ 408], 60.00th=[ 416], 00:16:48.269 | 
70.00th=[ 433], 80.00th=[ 453], 90.00th=[ 506], 95.00th=[ 570], 00:16:48.269 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:16:48.269 | 99.99th=[42206] 00:16:48.269 bw ( KiB/s): min= 96, max= 9408, per=19.61%, avg=3772.83, stdev=4417.68, samples=6 00:16:48.269 iops : min= 24, max= 2352, avg=943.17, stdev=1104.45, samples=6 00:16:48.269 lat (usec) : 500=89.42%, 750=8.89%, 1000=0.13% 00:16:48.269 lat (msec) : 2=0.03%, 50=1.49% 00:16:48.269 cpu : usr=0.47%, sys=1.60%, ctx=3019, majf=0, minf=1 00:16:48.269 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:48.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.269 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.269 issued rwts: total=3016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.269 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:48.269 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2774971: Mon Jul 15 11:32:22 2024 00:16:48.269 read: IOPS=1305, BW=5221KiB/s (5346kB/s)(17.6MiB/3451msec) 00:16:48.269 slat (usec): min=7, max=19397, avg=22.98, stdev=408.86 00:16:48.269 clat (usec): min=325, max=42014, avg=735.99, stdev=3516.25 00:16:48.269 lat (usec): min=334, max=42041, avg=758.97, stdev=3539.72 00:16:48.269 clat percentiles (usec): 00:16:48.269 | 1.00th=[ 347], 5.00th=[ 375], 10.00th=[ 383], 20.00th=[ 396], 00:16:48.269 | 30.00th=[ 404], 40.00th=[ 416], 50.00th=[ 424], 60.00th=[ 433], 00:16:48.269 | 70.00th=[ 445], 80.00th=[ 453], 90.00th=[ 478], 95.00th=[ 545], 00:16:48.269 | 99.00th=[ 603], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:16:48.269 | 99.99th=[42206] 00:16:48.269 bw ( KiB/s): min= 120, max= 9164, per=24.96%, avg=4802.00, stdev=4626.95, samples=6 00:16:48.269 iops : min= 30, max= 2291, avg=1200.50, stdev=1156.74, samples=6 00:16:48.269 lat (usec) : 500=93.23%, 750=5.97% 00:16:48.269 lat (msec) : 2=0.02%, 50=0.75% 00:16:48.269 cpu : usr=0.96%, sys=2.20%, ctx=4511, majf=0, minf=1 00:16:48.269 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:48.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.269 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.269 issued rwts: total=4505,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.269 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:48.269 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2774995: Mon Jul 15 11:32:22 2024 00:16:48.269 read: IOPS=1516, BW=6067KiB/s (6212kB/s)(17.4MiB/2944msec) 00:16:48.269 slat (nsec): min=7138, max=39446, avg=8159.65, stdev=1769.78 00:16:48.269 clat (usec): min=270, max=42009, avg=644.33, stdev=3366.47 00:16:48.269 lat (usec): min=278, max=42031, avg=652.48, stdev=3367.59 00:16:48.269 clat percentiles (usec): 00:16:48.269 | 1.00th=[ 293], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 330], 00:16:48.269 | 30.00th=[ 343], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 367], 00:16:48.269 | 70.00th=[ 379], 80.00th=[ 388], 90.00th=[ 408], 95.00th=[ 437], 00:16:48.269 | 99.00th=[ 537], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:16:48.269 | 99.99th=[42206] 00:16:48.269 bw ( KiB/s): min= 104, max=10792, per=33.42%, avg=6428.80, stdev=5599.59, samples=5 00:16:48.269 iops : min= 26, max= 2698, avg=1607.20, stdev=1399.90, samples=5 00:16:48.269 lat (usec) : 500=97.22%, 750=2.04% 00:16:48.269 lat (msec) : 
2=0.02%, 50=0.69% 00:16:48.269 cpu : usr=1.19%, sys=2.14%, ctx=4466, majf=0, minf=1 00:16:48.269 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:48.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.269 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.269 issued rwts: total=4466,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.269 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:48.269 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2775003: Mon Jul 15 11:32:22 2024 00:16:48.269 read: IOPS=1736, BW=6946KiB/s (7112kB/s)(18.0MiB/2656msec) 00:16:48.269 slat (nsec): min=7146, max=40816, avg=8179.76, stdev=1718.07 00:16:48.269 clat (usec): min=257, max=42039, avg=563.51, stdev=2981.37 00:16:48.269 lat (usec): min=264, max=42059, avg=571.69, stdev=2982.33 00:16:48.269 clat percentiles (usec): 00:16:48.269 | 1.00th=[ 285], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 314], 00:16:48.269 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 343], 00:16:48.269 | 70.00th=[ 355], 80.00th=[ 363], 90.00th=[ 379], 95.00th=[ 453], 00:16:48.269 | 99.00th=[ 523], 99.50th=[40633], 99.90th=[42206], 99.95th=[42206], 00:16:48.269 | 99.99th=[42206] 00:16:48.269 bw ( KiB/s): min= 104, max=11784, per=35.02%, avg=6737.60, stdev=5984.03, samples=5 00:16:48.269 iops : min= 26, max= 2946, avg=1684.40, stdev=1496.01, samples=5 00:16:48.269 lat (usec) : 500=98.03%, 750=1.41% 00:16:48.269 lat (msec) : 50=0.54% 00:16:48.269 cpu : usr=0.75%, sys=3.05%, ctx=4613, majf=0, minf=2 00:16:48.269 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:48.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.269 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.269 issued rwts: total=4613,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.269 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:48.269 00:16:48.269 Run status group 0 (all jobs): 00:16:48.269 READ: bw=18.8MiB/s (19.7MB/s), 3779KiB/s-6946KiB/s (3870kB/s-7112kB/s), io=64.8MiB (68.0MB), run=2656-3451msec 00:16:48.269 00:16:48.269 Disk stats (read/write): 00:16:48.269 nvme0n1: ios=2853/0, merge=0/0, ticks=2998/0, in_queue=2998, util=94.36% 00:16:48.269 nvme0n2: ios=4263/0, merge=0/0, ticks=3169/0, in_queue=3169, util=94.16% 00:16:48.269 nvme0n3: ios=4462/0, merge=0/0, ticks=2710/0, in_queue=2710, util=96.39% 00:16:48.269 nvme0n4: ios=4432/0, merge=0/0, ticks=2476/0, in_queue=2476, util=96.45% 00:16:48.528 11:32:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:48.528 11:32:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:48.787 11:32:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:48.787 11:32:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:49.045 11:32:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:49.045 11:32:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:49.303 
11:32:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:49.303 11:32:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:49.562 11:32:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:16:49.562 11:32:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2774707 00:16:49.562 11:32:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:49.562 11:32:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:49.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.821 11:32:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:49.821 11:32:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:16:49.821 11:32:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:49.821 11:32:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.821 11:32:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:49.821 11:32:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.821 11:32:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:16:49.821 11:32:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:49.821 11:32:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:49.821 nvmf hotplug test: fio failed as expected 00:16:49.821 11:32:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:50.080 11:32:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:50.080 11:32:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:50.080 11:32:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:50.080 11:32:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:50.080 11:32:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:16:50.080 11:32:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:50.081 11:32:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:16:50.081 11:32:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:50.081 11:32:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:16:50.081 11:32:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:50.081 11:32:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:50.081 rmmod nvme_tcp 00:16:50.081 rmmod nvme_fabrics 00:16:50.081 rmmod nvme_keyring 00:16:50.081 11:32:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:50.081 11:32:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:16:50.081 11:32:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:16:50.081 11:32:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2771393 ']' 00:16:50.081 11:32:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2771393 00:16:50.081 11:32:24 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 2771393 ']' 00:16:50.081 11:32:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 2771393 00:16:50.081 11:32:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:16:50.081 11:32:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:50.081 11:32:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2771393 00:16:50.081 11:32:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:50.081 11:32:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:50.081 11:32:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2771393' 00:16:50.081 killing process with pid 2771393 00:16:50.081 11:32:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 2771393 00:16:50.081 11:32:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 2771393 00:16:50.340 11:32:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:50.340 11:32:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:50.340 11:32:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:50.340 11:32:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:50.340 11:32:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:50.340 11:32:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.340 11:32:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.340 11:32:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.877 11:32:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:52.877 00:16:52.877 real 0m28.885s 00:16:52.877 user 2m25.863s 00:16:52.877 sys 0m8.635s 00:16:52.877 11:32:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:52.877 11:32:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.877 ************************************ 00:16:52.877 END TEST nvmf_fio_target 00:16:52.877 ************************************ 00:16:52.877 11:32:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:52.877 11:32:26 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:52.877 11:32:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:52.877 11:32:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:52.877 11:32:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:52.877 ************************************ 00:16:52.877 START TEST nvmf_bdevio 00:16:52.877 ************************************ 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:52.877 * Looking for test storage... 
00:16:52.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:16:52.877 11:32:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:59.445 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:59.445 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:16:59.445 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:59.445 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:59.445 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:59.445 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:59.445 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:59.445 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:59.446 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:59.446 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:59.446 Found net devices under 0000:af:00.0: cvl_0_0 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:59.446 
Found net devices under 0000:af:00.1: cvl_0_1 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:59.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:59.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:16:59.446 00:16:59.446 --- 10.0.0.2 ping statistics --- 00:16:59.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.446 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:59.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:59.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:16:59.446 00:16:59.446 --- 10.0.0.1 ping statistics --- 00:16:59.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.446 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2779658 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2779658 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 2779658 ']' 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:59.446 11:32:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:59.446 [2024-07-15 11:32:33.010369] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:16:59.446 [2024-07-15 11:32:33.010432] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.446 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.446 [2024-07-15 11:32:33.128802] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:59.446 [2024-07-15 11:32:33.278169] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:59.446 [2024-07-15 11:32:33.278237] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:59.446 [2024-07-15 11:32:33.278269] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:59.446 [2024-07-15 11:32:33.278289] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:59.446 [2024-07-15 11:32:33.278304] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:59.446 [2024-07-15 11:32:33.278449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:59.446 [2024-07-15 11:32:33.278567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:59.446 [2024-07-15 11:32:33.278682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:59.446 [2024-07-15 11:32:33.278687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:59.446 11:32:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:59.447 11:32:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:16:59.447 11:32:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:59.447 11:32:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:59.447 11:32:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:59.447 11:32:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.447 11:32:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:59.447 11:32:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.447 11:32:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:59.706 [2024-07-15 11:32:33.911924] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:59.706 Malloc0 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
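For reference, the environment that bdevio.sh has assembled at this point boils down to the following condensed sketch (reconstructed from the nvmf_tcp_init and rpc_cmd xtrace lines above, not the verbatim scripts; interface names, addresses and NQN are the ones this run reports):

  # one e810 port moves into a private network namespace and becomes the target side
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                                # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                  # target ns -> root ns

  # the target runs inside the namespace; rpc_cmd talks to it on the default /var/tmp/spdk.sock
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB bdev, 512-byte blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420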
00:16:59.706 [2024-07-15 11:32:33.976489] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.706 { 00:16:59.706 "params": { 00:16:59.706 "name": "Nvme$subsystem", 00:16:59.706 "trtype": "$TEST_TRANSPORT", 00:16:59.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.706 "adrfam": "ipv4", 00:16:59.706 "trsvcid": "$NVMF_PORT", 00:16:59.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.706 "hdgst": ${hdgst:-false}, 00:16:59.706 "ddgst": ${ddgst:-false} 00:16:59.706 }, 00:16:59.706 "method": "bdev_nvme_attach_controller" 00:16:59.706 } 00:16:59.706 EOF 00:16:59.706 )") 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:16:59.706 11:32:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:59.706 "params": { 00:16:59.706 "name": "Nvme1", 00:16:59.706 "trtype": "tcp", 00:16:59.706 "traddr": "10.0.0.2", 00:16:59.706 "adrfam": "ipv4", 00:16:59.706 "trsvcid": "4420", 00:16:59.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.706 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:59.706 "hdgst": false, 00:16:59.706 "ddgst": false 00:16:59.706 }, 00:16:59.706 "method": "bdev_nvme_attach_controller" 00:16:59.706 }' 00:16:59.706 [2024-07-15 11:32:34.025161] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
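The bdevio binary is fed its configuration through bash process substitution: the --json /dev/fd/62 argument is the path bash substitutes for <(gen_nvmf_target_json), i.e. the JSON document printed just above, which attaches one NVMe-oF/TCP controller named Nvme1 (10.0.0.2:4420, subsystem cnode1; its namespace appears as bdev Nvme1n1) before the CUnit suite runs. A minimal equivalent invocation, assuming the same helper from nvmf/common.sh is sourced:

  ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)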
00:16:59.706 [2024-07-15 11:32:34.025203] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2779750 ] 00:16:59.706 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.706 [2024-07-15 11:32:34.095519] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:59.965 [2024-07-15 11:32:34.189309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.965 [2024-07-15 11:32:34.189342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:59.965 [2024-07-15 11:32:34.189345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.223 I/O targets: 00:17:00.223 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:00.223 00:17:00.223 00:17:00.223 CUnit - A unit testing framework for C - Version 2.1-3 00:17:00.223 http://cunit.sourceforge.net/ 00:17:00.223 00:17:00.223 00:17:00.223 Suite: bdevio tests on: Nvme1n1 00:17:00.223 Test: blockdev write read block ...passed 00:17:00.223 Test: blockdev write zeroes read block ...passed 00:17:00.223 Test: blockdev write zeroes read no split ...passed 00:17:00.223 Test: blockdev write zeroes read split ...passed 00:17:00.223 Test: blockdev write zeroes read split partial ...passed 00:17:00.223 Test: blockdev reset ...[2024-07-15 11:32:34.677224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:00.223 [2024-07-15 11:32:34.677305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b79c80 (9): Bad file descriptor 00:17:00.482 [2024-07-15 11:32:34.695179] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:00.482 passed 00:17:00.482 Test: blockdev write read 8 blocks ...passed 00:17:00.482 Test: blockdev write read size > 128k ...passed 00:17:00.482 Test: blockdev write read invalid size ...passed 00:17:00.482 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:00.482 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:00.482 Test: blockdev write read max offset ...passed 00:17:00.482 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:00.482 Test: blockdev writev readv 8 blocks ...passed 00:17:00.742 Test: blockdev writev readv 30 x 1block ...passed 00:17:00.742 Test: blockdev writev readv block ...passed 00:17:00.742 Test: blockdev writev readv size > 128k ...passed 00:17:00.742 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:00.742 Test: blockdev comparev and writev ...[2024-07-15 11:32:35.039357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:00.742 [2024-07-15 11:32:35.039422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.742 [2024-07-15 11:32:35.039463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:00.742 [2024-07-15 11:32:35.039488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:00.742 [2024-07-15 11:32:35.040205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:00.742 [2024-07-15 11:32:35.040237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:00.742 [2024-07-15 11:32:35.040283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:00.742 [2024-07-15 11:32:35.040306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:00.742 [2024-07-15 11:32:35.041019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:00.742 [2024-07-15 11:32:35.041050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:00.742 [2024-07-15 11:32:35.041087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:00.742 [2024-07-15 11:32:35.041108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:00.742 [2024-07-15 11:32:35.041805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:00.742 [2024-07-15 11:32:35.041836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:00.742 [2024-07-15 11:32:35.041874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:00.742 [2024-07-15 11:32:35.041905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:00.742 passed 00:17:00.742 Test: blockdev nvme passthru rw ...passed 00:17:00.742 Test: blockdev nvme passthru vendor specific ...[2024-07-15 11:32:35.124931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:00.742 [2024-07-15 11:32:35.124969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:00.742 [2024-07-15 11:32:35.125233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:00.742 [2024-07-15 11:32:35.125273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:00.742 [2024-07-15 11:32:35.125543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:00.743 [2024-07-15 11:32:35.125573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:00.743 [2024-07-15 11:32:35.125822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:00.743 [2024-07-15 11:32:35.125852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:00.743 passed 00:17:00.743 Test: blockdev nvme admin passthru ...passed 00:17:00.743 Test: blockdev copy ...passed 00:17:00.743 00:17:00.743 Run Summary: Type Total Ran Passed Failed Inactive 00:17:00.743 suites 1 1 n/a 0 0 00:17:00.743 tests 23 23 23 0 0 00:17:00.743 asserts 152 152 152 0 n/a 00:17:00.743 00:17:00.743 Elapsed time = 1.242 seconds 00:17:01.001 11:32:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:01.001 11:32:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.001 11:32:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:01.001 11:32:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.001 11:32:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:01.001 11:32:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:01.001 11:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:01.001 11:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:01.001 11:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:01.001 11:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:01.001 11:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:01.001 11:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:01.001 rmmod nvme_tcp 00:17:01.001 rmmod nvme_fabrics 00:17:01.001 rmmod nvme_keyring 00:17:01.001 11:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:01.001 11:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:01.001 11:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:01.001 11:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2779658 ']' 00:17:01.001 11:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2779658 00:17:01.001 11:32:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
2779658 ']' 00:17:01.001 11:32:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 2779658 00:17:01.001 11:32:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:17:01.001 11:32:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:01.001 11:32:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2779658 00:17:01.260 11:32:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:01.260 11:32:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:01.260 11:32:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2779658' 00:17:01.260 killing process with pid 2779658 00:17:01.260 11:32:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 2779658 00:17:01.260 11:32:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 2779658 00:17:01.519 11:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:01.519 11:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:01.519 11:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:01.519 11:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:01.519 11:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:01.519 11:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.519 11:32:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.519 11:32:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.425 11:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:03.425 00:17:03.425 real 0m11.009s 00:17:03.425 user 0m14.351s 00:17:03.425 sys 0m5.063s 00:17:03.425 11:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:03.425 11:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:03.425 ************************************ 00:17:03.425 END TEST nvmf_bdevio 00:17:03.425 ************************************ 00:17:03.684 11:32:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:03.684 11:32:37 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:03.684 11:32:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:03.684 11:32:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:03.684 11:32:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:03.684 ************************************ 00:17:03.684 START TEST nvmf_auth_target 00:17:03.684 ************************************ 00:17:03.684 11:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:03.684 * Looking for test storage... 
00:17:03.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:03.684 11:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:10.258 11:32:43 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:10.258 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:10.258 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:17:10.258 Found net devices under 0000:af:00.0: cvl_0_0 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:10.258 Found net devices under 0000:af:00.1: cvl_0_1 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:10.258 11:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:10.258 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:10.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:10.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:17:10.258 00:17:10.258 --- 10.0.0.2 ping statistics --- 00:17:10.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.258 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:17:10.258 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:10.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:10.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:17:10.258 00:17:10.258 --- 10.0.0.1 ping statistics --- 00:17:10.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.258 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:17:10.258 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.258 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:10.258 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:10.258 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.258 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:10.258 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:10.258 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2783669 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2783669 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2783669 ']' 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
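The nvmf_auth_target test repeats the same e810 discovery and namespace bring-up as the bdevio run above (cvl_0_0 at 10.0.0.2 inside cvl_0_0_ns_spdk, cvl_0_1 at 10.0.0.1 in the root namespace), but it drives two SPDK processes: the target started here with DH-HMAC-CHAP tracing enabled, and a host-side spdk_tgt launched a few lines further down whose RPC socket is /var/tmp/host.sock (served by the hostrpc calls later in this trace). Condensed from the launch commands visible in the trace:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth   # target, driven via rpc_cmd on /var/tmp/spdk.sock
  ./build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth                     # host side, driven via hostrpc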
00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2783706 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=367a70d8d3fa3225045b7b70ec7922e497ad5c0565899a9b 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.8KE 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 367a70d8d3fa3225045b7b70ec7922e497ad5c0565899a9b 0 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 367a70d8d3fa3225045b7b70ec7922e497ad5c0565899a9b 0 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=367a70d8d3fa3225045b7b70ec7922e497ad5c0565899a9b 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.8KE 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.8KE 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.8KE 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=44553967bb36dfef821e7f15a777c64d638c10eec46c7a538b79930bb114c5ac 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.lxA 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 44553967bb36dfef821e7f15a777c64d638c10eec46c7a538b79930bb114c5ac 3 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 44553967bb36dfef821e7f15a777c64d638c10eec46c7a538b79930bb114c5ac 3 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=44553967bb36dfef821e7f15a777c64d638c10eec46c7a538b79930bb114c5ac 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.lxA 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.lxA 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.lxA 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=649f9f45e98f11175d5e6b09613e3b7c 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.fPe 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 649f9f45e98f11175d5e6b09613e3b7c 1 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 649f9f45e98f11175d5e6b09613e3b7c 1 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=649f9f45e98f11175d5e6b09613e3b7c 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.fPe 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.fPe 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.fPe 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=209893f334169f7ad41e81973cf7e04e37e027bf69c217cc 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.OP7 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 209893f334169f7ad41e81973cf7e04e37e027bf69c217cc 2 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 209893f334169f7ad41e81973cf7e04e37e027bf69c217cc 2 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=209893f334169f7ad41e81973cf7e04e37e027bf69c217cc 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:10.259 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.OP7 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.OP7 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.OP7 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=54fec94462c3e0d629799a91a8b6c99c7044744b89b795a8 00:17:10.519 
11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.cWK 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 54fec94462c3e0d629799a91a8b6c99c7044744b89b795a8 2 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 54fec94462c3e0d629799a91a8b6c99c7044744b89b795a8 2 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=54fec94462c3e0d629799a91a8b6c99c7044744b89b795a8 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.cWK 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.cWK 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.cWK 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=88adb470421233727e5eec8d72805402 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.VTP 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 88adb470421233727e5eec8d72805402 1 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 88adb470421233727e5eec8d72805402 1 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=88adb470421233727e5eec8d72805402 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:10.519 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.VTP 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.VTP 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.VTP 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a1f9a34f34a1d379d65fe686bc437c1e3c6cd562b7eb8cbaddbbd590e58d3df6 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.gMf 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a1f9a34f34a1d379d65fe686bc437c1e3c6cd562b7eb8cbaddbbd590e58d3df6 3 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a1f9a34f34a1d379d65fe686bc437c1e3c6cd562b7eb8cbaddbbd590e58d3df6 3 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a1f9a34f34a1d379d65fe686bc437c1e3c6cd562b7eb8cbaddbbd590e58d3df6 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.gMf 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.gMf 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.gMf 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2783669 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2783669 ']' 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
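The seven gen_dhchap_key calls above build the DH-HMAC-CHAP secrets the auth test will exercise: keys[0..3] plus controller-side ckeys[0..2] (ckeys[3] is left empty), one temporary file each under /tmp. Condensed from the traced helper in nvmf/common.sh; the inline "python -" step that serializes the final secret is not echoed in this log, so it is only summarized in a comment:

  # gen_dhchap_key <digest> <len>, with digest ids null=0 sha256=1 sha384=2 sha512=3
  digest=sha256; len=32
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)        # len hex characters of random key material
  file=$(mktemp -t "spdk.key-$digest.XXX")
  # format_dhchap_key/format_key prefix the material with DHHC-1 and the digest id and the
  # result ends up in $file (the exact serialization is done by the inline python helper)
  chmod 0600 "$file"
  echo "$file"

Each resulting file is then handed to the keyring on both sides, as the following lines show: rpc_cmd keyring_file_add_key keyN <file> registers it with the target, and the same RPC issued against /var/tmp/host.sock (hostrpc) registers it with the host.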
00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:10.520 11:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.779 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:10.779 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:10.779 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2783706 /var/tmp/host.sock 00:17:10.779 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2783706 ']' 00:17:10.779 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:10.779 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:10.779 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:10.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:10.779 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:10.779 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.039 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:11.039 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:11.039 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:11.039 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.039 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.039 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.039 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:11.039 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.8KE 00:17:11.039 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.039 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.039 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.039 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.8KE 00:17:11.039 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.8KE 00:17:11.298 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.lxA ]] 00:17:11.298 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lxA 00:17:11.298 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.298 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.298 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.298 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lxA 00:17:11.298 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lxA 00:17:11.558 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:11.558 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.fPe 00:17:11.558 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.558 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.558 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.558 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.fPe 00:17:11.558 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.fPe 00:17:11.818 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.OP7 ]] 00:17:11.818 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OP7 00:17:11.818 11:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.818 11:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.818 11:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.818 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OP7 00:17:11.818 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OP7 00:17:12.077 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:12.077 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.cWK 00:17:12.077 11:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.077 11:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.077 11:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.077 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.cWK 00:17:12.077 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.cWK 00:17:12.335 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.VTP ]] 00:17:12.335 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.VTP 00:17:12.335 11:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.335 11:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.335 11:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.335 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.VTP 00:17:12.335 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.VTP 00:17:12.594 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:12.594 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.gMf 00:17:12.594 11:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.594 11:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.594 11:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.594 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.gMf 00:17:12.594 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.gMf 00:17:12.852 11:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:12.852 11:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:12.852 11:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.852 11:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.852 11:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:12.852 11:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:13.110 11:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:13.110 11:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.110 11:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:13.110 11:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:13.110 11:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:13.110 11:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.110 11:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.110 11:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.110 11:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.110 11:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.110 11:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.110 11:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.368 00:17:13.369 11:32:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.369 11:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.369 11:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.627 11:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.627 11:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.627 11:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.627 11:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.627 11:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.627 11:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.627 { 00:17:13.627 "cntlid": 1, 00:17:13.627 "qid": 0, 00:17:13.627 "state": "enabled", 00:17:13.627 "thread": "nvmf_tgt_poll_group_000", 00:17:13.627 "listen_address": { 00:17:13.627 "trtype": "TCP", 00:17:13.627 "adrfam": "IPv4", 00:17:13.628 "traddr": "10.0.0.2", 00:17:13.628 "trsvcid": "4420" 00:17:13.628 }, 00:17:13.628 "peer_address": { 00:17:13.628 "trtype": "TCP", 00:17:13.628 "adrfam": "IPv4", 00:17:13.628 "traddr": "10.0.0.1", 00:17:13.628 "trsvcid": "58230" 00:17:13.628 }, 00:17:13.628 "auth": { 00:17:13.628 "state": "completed", 00:17:13.628 "digest": "sha256", 00:17:13.628 "dhgroup": "null" 00:17:13.628 } 00:17:13.628 } 00:17:13.628 ]' 00:17:13.628 11:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.628 11:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:13.628 11:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.886 11:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:13.886 11:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.886 11:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.886 11:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.886 11:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.143 11:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzY3YTcwZDhkM2ZhMzIyNTA0NWI3YjcwZWM3OTIyZTQ5N2FkNWMwNTY1ODk5YTlid4de6A==: --dhchap-ctrl-secret DHHC-1:03:NDQ1NTM5NjdiYjM2ZGZlZjgyMWU3ZjE1YTc3N2M2NGQ2MzhjMTBlZWM0NmM3YTUzOGI3OTkzMGJiMTE0YzVhY5XLBos=: 00:17:14.709 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.709 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:14.709 11:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.709 11:32:49 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.709 11:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.709 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.709 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:14.709 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:14.967 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:14.967 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:14.967 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:14.967 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:14.967 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:14.967 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.967 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.967 11:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.967 11:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.967 11:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.967 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.967 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.226 00:17:15.485 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.485 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.485 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.744 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.744 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.744 11:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.744 11:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.744 11:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.744 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:15.744 { 00:17:15.744 "cntlid": 3, 00:17:15.744 "qid": 0, 00:17:15.744 
"state": "enabled", 00:17:15.744 "thread": "nvmf_tgt_poll_group_000", 00:17:15.744 "listen_address": { 00:17:15.744 "trtype": "TCP", 00:17:15.744 "adrfam": "IPv4", 00:17:15.744 "traddr": "10.0.0.2", 00:17:15.744 "trsvcid": "4420" 00:17:15.744 }, 00:17:15.744 "peer_address": { 00:17:15.744 "trtype": "TCP", 00:17:15.744 "adrfam": "IPv4", 00:17:15.744 "traddr": "10.0.0.1", 00:17:15.744 "trsvcid": "58248" 00:17:15.744 }, 00:17:15.744 "auth": { 00:17:15.744 "state": "completed", 00:17:15.744 "digest": "sha256", 00:17:15.744 "dhgroup": "null" 00:17:15.744 } 00:17:15.744 } 00:17:15.744 ]' 00:17:15.744 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:15.744 11:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:15.744 11:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:15.744 11:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:15.744 11:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.744 11:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.744 11:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.744 11:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.002 11:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NjQ5ZjlmNDVlOThmMTExNzVkNWU2YjA5NjEzZTNiN2NUVF1z: --dhchap-ctrl-secret DHHC-1:02:MjA5ODkzZjMzNDE2OWY3YWQ0MWU4MTk3M2NmN2UwNGUzN2UwMjdiZjY5YzIxN2NjjBu1gg==: 00:17:16.938 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.938 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:16.938 11:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.938 11:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.938 11:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.938 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:16.938 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:16.938 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:16.938 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:16.938 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:16.938 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:16.938 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:16.938 11:32:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:16.938 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.938 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.938 11:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.938 11:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.938 11:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.938 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.938 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.196 00:17:17.196 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.196 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:17.196 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.454 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.454 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.454 11:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.454 11:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.454 11:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.454 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:17.454 { 00:17:17.454 "cntlid": 5, 00:17:17.454 "qid": 0, 00:17:17.454 "state": "enabled", 00:17:17.454 "thread": "nvmf_tgt_poll_group_000", 00:17:17.454 "listen_address": { 00:17:17.454 "trtype": "TCP", 00:17:17.454 "adrfam": "IPv4", 00:17:17.454 "traddr": "10.0.0.2", 00:17:17.454 "trsvcid": "4420" 00:17:17.454 }, 00:17:17.454 "peer_address": { 00:17:17.454 "trtype": "TCP", 00:17:17.454 "adrfam": "IPv4", 00:17:17.454 "traddr": "10.0.0.1", 00:17:17.454 "trsvcid": "58274" 00:17:17.454 }, 00:17:17.454 "auth": { 00:17:17.454 "state": "completed", 00:17:17.454 "digest": "sha256", 00:17:17.454 "dhgroup": "null" 00:17:17.454 } 00:17:17.454 } 00:17:17.454 ]' 00:17:17.454 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:17.712 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:17.712 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:17.712 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:17.712 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:17:17.712 11:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.712 11:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.712 11:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.970 11:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTRmZWM5NDQ2MmMzZTBkNjI5Nzk5YTkxYThiNmM5OWM3MDQ0NzQ0Yjg5Yjc5NWE4IA+76Q==: --dhchap-ctrl-secret DHHC-1:01:ODhhZGI0NzA0MjEyMzM3MjdlNWVlYzhkNzI4MDU0MDKlMAdD: 00:17:18.911 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.911 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:18.911 11:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.911 11:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.911 11:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.911 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:18.911 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:18.911 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:18.911 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:18.911 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.911 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:18.911 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:18.911 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:18.911 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.911 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:18.911 11:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.911 11:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.911 11:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.912 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.912 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:19.169 00:17:19.426 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.426 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.426 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.426 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.426 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.426 11:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.426 11:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.685 11:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.685 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.685 { 00:17:19.685 "cntlid": 7, 00:17:19.685 "qid": 0, 00:17:19.685 "state": "enabled", 00:17:19.685 "thread": "nvmf_tgt_poll_group_000", 00:17:19.685 "listen_address": { 00:17:19.685 "trtype": "TCP", 00:17:19.685 "adrfam": "IPv4", 00:17:19.685 "traddr": "10.0.0.2", 00:17:19.685 "trsvcid": "4420" 00:17:19.685 }, 00:17:19.685 "peer_address": { 00:17:19.685 "trtype": "TCP", 00:17:19.685 "adrfam": "IPv4", 00:17:19.685 "traddr": "10.0.0.1", 00:17:19.685 "trsvcid": "52442" 00:17:19.685 }, 00:17:19.685 "auth": { 00:17:19.685 "state": "completed", 00:17:19.685 "digest": "sha256", 00:17:19.685 "dhgroup": "null" 00:17:19.685 } 00:17:19.685 } 00:17:19.685 ]' 00:17:19.685 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.685 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:19.685 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.685 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:19.685 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.685 11:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.685 11:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.685 11:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.944 11:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTFmOWEzNGYzNGExZDM3OWQ2NWZlNjg2YmM0MzdjMWUzYzZjZDU2MmI3ZWI4Y2JhZGRiYmQ1OTBlNThkM2RmNnccU+g=: 00:17:20.965 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.965 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:20.965 11:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.965 11:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.965 11:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.965 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:20.965 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:20.965 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:20.965 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:20.965 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:17:20.965 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.965 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:20.965 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:20.965 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:20.965 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.965 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.965 11:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.965 11:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.965 11:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.965 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.965 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.245 00:17:21.245 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.245 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.245 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.503 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.503 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.503 11:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:17:21.503 11:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.503 11:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.503 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.503 { 00:17:21.503 "cntlid": 9, 00:17:21.503 "qid": 0, 00:17:21.503 "state": "enabled", 00:17:21.503 "thread": "nvmf_tgt_poll_group_000", 00:17:21.503 "listen_address": { 00:17:21.503 "trtype": "TCP", 00:17:21.503 "adrfam": "IPv4", 00:17:21.503 "traddr": "10.0.0.2", 00:17:21.503 "trsvcid": "4420" 00:17:21.503 }, 00:17:21.503 "peer_address": { 00:17:21.503 "trtype": "TCP", 00:17:21.503 "adrfam": "IPv4", 00:17:21.503 "traddr": "10.0.0.1", 00:17:21.503 "trsvcid": "52476" 00:17:21.503 }, 00:17:21.503 "auth": { 00:17:21.503 "state": "completed", 00:17:21.503 "digest": "sha256", 00:17:21.503 "dhgroup": "ffdhe2048" 00:17:21.503 } 00:17:21.503 } 00:17:21.503 ]' 00:17:21.503 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.762 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:21.762 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.762 11:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:21.762 11:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.762 11:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.762 11:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.762 11:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.020 11:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzY3YTcwZDhkM2ZhMzIyNTA0NWI3YjcwZWM3OTIyZTQ5N2FkNWMwNTY1ODk5YTlid4de6A==: --dhchap-ctrl-secret DHHC-1:03:NDQ1NTM5NjdiYjM2ZGZlZjgyMWU3ZjE1YTc3N2M2NGQ2MzhjMTBlZWM0NmM3YTUzOGI3OTkzMGJiMTE0YzVhY5XLBos=: 00:17:22.958 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.958 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:22.958 11:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.958 11:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.958 11:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.958 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.958 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:22.958 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:17:22.958 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:22.958 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.958 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:22.958 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:22.958 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:22.958 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.958 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.958 11:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.958 11:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.958 11:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.958 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.958 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.527 00:17:23.527 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.527 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.527 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.527 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.527 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.527 11:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.527 11:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.527 11:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.527 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.527 { 00:17:23.527 "cntlid": 11, 00:17:23.527 "qid": 0, 00:17:23.527 "state": "enabled", 00:17:23.527 "thread": "nvmf_tgt_poll_group_000", 00:17:23.527 "listen_address": { 00:17:23.527 "trtype": "TCP", 00:17:23.527 "adrfam": "IPv4", 00:17:23.527 "traddr": "10.0.0.2", 00:17:23.527 "trsvcid": "4420" 00:17:23.527 }, 00:17:23.527 "peer_address": { 00:17:23.527 "trtype": "TCP", 00:17:23.527 "adrfam": "IPv4", 00:17:23.527 "traddr": "10.0.0.1", 00:17:23.527 "trsvcid": "52506" 00:17:23.527 }, 00:17:23.527 "auth": { 00:17:23.527 "state": "completed", 00:17:23.527 "digest": "sha256", 00:17:23.527 "dhgroup": "ffdhe2048" 00:17:23.527 } 00:17:23.527 } 00:17:23.527 ]' 00:17:23.527 
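Reference note (not part of the captured run): each connect_authenticate round ends with the same verification pattern that follows below. As an illustrative sketch only, using the host RPC socket and subsystem NQN from this run (the target-side rpc_cmd in the trace talks to the default /var/tmp/spdk.sock socket), it amounts to roughly:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)       # target side, default RPC socket
  echo "$qpairs" | jq -r '.[0].auth.digest'    # expect the digest under test, e.g. sha256
  echo "$qpairs" | jq -r '.[0].auth.dhgroup'   # expect the dhgroup under test, e.g. ffdhe2048
  echo "$qpairs" | jq -r '.[0].auth.state'     # expect completed
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0              # tear down before the next round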
11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.786 11:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:23.786 11:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.786 11:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:23.786 11:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.786 11:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.786 11:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.786 11:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.107 11:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NjQ5ZjlmNDVlOThmMTExNzVkNWU2YjA5NjEzZTNiN2NUVF1z: --dhchap-ctrl-secret DHHC-1:02:MjA5ODkzZjMzNDE2OWY3YWQ0MWU4MTk3M2NmN2UwNGUzN2UwMjdiZjY5YzIxN2NjjBu1gg==: 00:17:24.676 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.676 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:24.676 11:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.676 11:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.676 11:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.676 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.676 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:24.676 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:24.935 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:24.935 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.935 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:24.935 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:24.935 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:24.935 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.935 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.935 11:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.935 11:32:59 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:24.935 11:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.935 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.935 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.503 00:17:25.503 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.503 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.503 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.503 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.503 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.503 11:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.503 11:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.503 11:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.503 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.503 { 00:17:25.503 "cntlid": 13, 00:17:25.503 "qid": 0, 00:17:25.503 "state": "enabled", 00:17:25.503 "thread": "nvmf_tgt_poll_group_000", 00:17:25.503 "listen_address": { 00:17:25.503 "trtype": "TCP", 00:17:25.503 "adrfam": "IPv4", 00:17:25.503 "traddr": "10.0.0.2", 00:17:25.503 "trsvcid": "4420" 00:17:25.503 }, 00:17:25.503 "peer_address": { 00:17:25.503 "trtype": "TCP", 00:17:25.503 "adrfam": "IPv4", 00:17:25.503 "traddr": "10.0.0.1", 00:17:25.503 "trsvcid": "52526" 00:17:25.503 }, 00:17:25.503 "auth": { 00:17:25.503 "state": "completed", 00:17:25.503 "digest": "sha256", 00:17:25.503 "dhgroup": "ffdhe2048" 00:17:25.503 } 00:17:25.503 } 00:17:25.503 ]' 00:17:25.503 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.762 11:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:25.762 11:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.762 11:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:25.762 11:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.762 11:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.762 11:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.762 11:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.021 11:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTRmZWM5NDQ2MmMzZTBkNjI5Nzk5YTkxYThiNmM5OWM3MDQ0NzQ0Yjg5Yjc5NWE4IA+76Q==: --dhchap-ctrl-secret DHHC-1:01:ODhhZGI0NzA0MjEyMzM3MjdlNWVlYzhkNzI4MDU0MDKlMAdD: 00:17:26.958 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.958 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:26.958 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.958 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.958 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.958 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.958 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:26.958 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:26.958 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:26.958 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.958 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:26.958 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:26.958 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:26.958 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.958 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:26.958 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.958 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.958 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.958 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:26.958 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:27.217 00:17:27.476 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.476 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.476 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.734 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.734 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.734 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.734 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.734 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.734 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.734 { 00:17:27.734 "cntlid": 15, 00:17:27.734 "qid": 0, 00:17:27.734 "state": "enabled", 00:17:27.734 "thread": "nvmf_tgt_poll_group_000", 00:17:27.734 "listen_address": { 00:17:27.734 "trtype": "TCP", 00:17:27.734 "adrfam": "IPv4", 00:17:27.734 "traddr": "10.0.0.2", 00:17:27.734 "trsvcid": "4420" 00:17:27.734 }, 00:17:27.734 "peer_address": { 00:17:27.734 "trtype": "TCP", 00:17:27.734 "adrfam": "IPv4", 00:17:27.734 "traddr": "10.0.0.1", 00:17:27.734 "trsvcid": "52556" 00:17:27.734 }, 00:17:27.734 "auth": { 00:17:27.734 "state": "completed", 00:17:27.734 "digest": "sha256", 00:17:27.734 "dhgroup": "ffdhe2048" 00:17:27.734 } 00:17:27.734 } 00:17:27.734 ]' 00:17:27.734 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.734 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:27.734 11:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.734 11:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:27.734 11:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.734 11:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.734 11:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.734 11:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.993 11:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTFmOWEzNGYzNGExZDM3OWQ2NWZlNjg2YmM0MzdjMWUzYzZjZDU2MmI3ZWI4Y2JhZGRiYmQ1OTBlNThkM2RmNnccU+g=: 00:17:28.930 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.930 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:28.930 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.930 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.930 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.930 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.930 11:33:03 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.930 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:28.930 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:28.930 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:28.930 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.930 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:28.930 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:28.930 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:28.930 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.930 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.930 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.930 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.930 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.930 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.930 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.499 00:17:29.499 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.499 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.499 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.757 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.757 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.757 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.757 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.757 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.757 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.757 { 00:17:29.757 "cntlid": 17, 00:17:29.757 "qid": 0, 00:17:29.757 "state": "enabled", 00:17:29.757 "thread": "nvmf_tgt_poll_group_000", 00:17:29.757 "listen_address": { 00:17:29.757 "trtype": "TCP", 00:17:29.757 "adrfam": "IPv4", 00:17:29.757 "traddr": 
"10.0.0.2", 00:17:29.757 "trsvcid": "4420" 00:17:29.757 }, 00:17:29.757 "peer_address": { 00:17:29.757 "trtype": "TCP", 00:17:29.757 "adrfam": "IPv4", 00:17:29.757 "traddr": "10.0.0.1", 00:17:29.757 "trsvcid": "51392" 00:17:29.757 }, 00:17:29.757 "auth": { 00:17:29.757 "state": "completed", 00:17:29.757 "digest": "sha256", 00:17:29.757 "dhgroup": "ffdhe3072" 00:17:29.757 } 00:17:29.757 } 00:17:29.757 ]' 00:17:29.757 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.757 11:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:29.757 11:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:29.757 11:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:29.757 11:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.757 11:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.757 11:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.757 11:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.015 11:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzY3YTcwZDhkM2ZhMzIyNTA0NWI3YjcwZWM3OTIyZTQ5N2FkNWMwNTY1ODk5YTlid4de6A==: --dhchap-ctrl-secret DHHC-1:03:NDQ1NTM5NjdiYjM2ZGZlZjgyMWU3ZjE1YTc3N2M2NGQ2MzhjMTBlZWM0NmM3YTUzOGI3OTkzMGJiMTE0YzVhY5XLBos=: 00:17:30.952 11:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.952 11:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:30.952 11:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.952 11:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.952 11:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.952 11:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.952 11:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:30.952 11:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:31.211 11:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:31.211 11:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.211 11:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:31.211 11:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:31.211 11:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:31.211 11:33:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.211 11:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.211 11:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.211 11:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.211 11:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.211 11:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.211 11:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.778 00:17:31.778 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.778 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.778 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.037 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.037 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.037 11:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.037 11:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.037 11:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.037 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.037 { 00:17:32.037 "cntlid": 19, 00:17:32.037 "qid": 0, 00:17:32.037 "state": "enabled", 00:17:32.037 "thread": "nvmf_tgt_poll_group_000", 00:17:32.037 "listen_address": { 00:17:32.037 "trtype": "TCP", 00:17:32.037 "adrfam": "IPv4", 00:17:32.037 "traddr": "10.0.0.2", 00:17:32.037 "trsvcid": "4420" 00:17:32.037 }, 00:17:32.037 "peer_address": { 00:17:32.037 "trtype": "TCP", 00:17:32.037 "adrfam": "IPv4", 00:17:32.037 "traddr": "10.0.0.1", 00:17:32.037 "trsvcid": "51412" 00:17:32.037 }, 00:17:32.037 "auth": { 00:17:32.037 "state": "completed", 00:17:32.037 "digest": "sha256", 00:17:32.037 "dhgroup": "ffdhe3072" 00:17:32.037 } 00:17:32.037 } 00:17:32.037 ]' 00:17:32.037 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.037 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:32.037 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.037 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:32.037 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.037 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.037 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.037 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.295 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NjQ5ZjlmNDVlOThmMTExNzVkNWU2YjA5NjEzZTNiN2NUVF1z: --dhchap-ctrl-secret DHHC-1:02:MjA5ODkzZjMzNDE2OWY3YWQ0MWU4MTk3M2NmN2UwNGUzN2UwMjdiZjY5YzIxN2NjjBu1gg==: 00:17:33.233 11:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.233 11:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:33.234 11:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.234 11:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.234 11:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.234 11:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.234 11:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:33.234 11:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:33.493 11:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:33.493 11:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.493 11:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:33.493 11:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:33.493 11:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:33.493 11:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.493 11:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.493 11:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.493 11:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.493 11:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.493 11:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.493 11:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.752 00:17:33.752 11:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.752 11:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.753 11:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.012 11:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.012 11:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.012 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.012 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.012 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.012 11:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.012 { 00:17:34.012 "cntlid": 21, 00:17:34.012 "qid": 0, 00:17:34.012 "state": "enabled", 00:17:34.012 "thread": "nvmf_tgt_poll_group_000", 00:17:34.012 "listen_address": { 00:17:34.012 "trtype": "TCP", 00:17:34.012 "adrfam": "IPv4", 00:17:34.012 "traddr": "10.0.0.2", 00:17:34.012 "trsvcid": "4420" 00:17:34.012 }, 00:17:34.012 "peer_address": { 00:17:34.012 "trtype": "TCP", 00:17:34.012 "adrfam": "IPv4", 00:17:34.012 "traddr": "10.0.0.1", 00:17:34.012 "trsvcid": "51432" 00:17:34.012 }, 00:17:34.012 "auth": { 00:17:34.012 "state": "completed", 00:17:34.012 "digest": "sha256", 00:17:34.012 "dhgroup": "ffdhe3072" 00:17:34.012 } 00:17:34.012 } 00:17:34.012 ]' 00:17:34.012 11:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.012 11:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:34.012 11:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.012 11:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:34.012 11:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.271 11:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.271 11:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.271 11:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.530 11:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTRmZWM5NDQ2MmMzZTBkNjI5Nzk5YTkxYThiNmM5OWM3MDQ0NzQ0Yjg5Yjc5NWE4IA+76Q==: --dhchap-ctrl-secret DHHC-1:01:ODhhZGI0NzA0MjEyMzM3MjdlNWVlYzhkNzI4MDU0MDKlMAdD: 00:17:35.099 11:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:17:35.099 11:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:35.099 11:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.099 11:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.099 11:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.099 11:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.099 11:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:35.099 11:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:35.358 11:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:35.358 11:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.358 11:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:35.358 11:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:35.358 11:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:35.358 11:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.358 11:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:35.358 11:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.358 11:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.358 11:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.358 11:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.358 11:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.618 00:17:35.618 11:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.618 11:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.618 11:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.900 11:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.900 11:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.900 11:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.900 11:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:17:35.900 11:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.900 11:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.900 { 00:17:35.900 "cntlid": 23, 00:17:35.900 "qid": 0, 00:17:35.900 "state": "enabled", 00:17:35.900 "thread": "nvmf_tgt_poll_group_000", 00:17:35.900 "listen_address": { 00:17:35.900 "trtype": "TCP", 00:17:35.900 "adrfam": "IPv4", 00:17:35.900 "traddr": "10.0.0.2", 00:17:35.900 "trsvcid": "4420" 00:17:35.900 }, 00:17:35.900 "peer_address": { 00:17:35.900 "trtype": "TCP", 00:17:35.900 "adrfam": "IPv4", 00:17:35.900 "traddr": "10.0.0.1", 00:17:35.900 "trsvcid": "51470" 00:17:35.900 }, 00:17:35.900 "auth": { 00:17:35.900 "state": "completed", 00:17:35.900 "digest": "sha256", 00:17:35.900 "dhgroup": "ffdhe3072" 00:17:35.900 } 00:17:35.900 } 00:17:35.900 ]' 00:17:35.900 11:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.900 11:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:35.900 11:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.159 11:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:36.159 11:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.159 11:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.159 11:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.159 11:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.418 11:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTFmOWEzNGYzNGExZDM3OWQ2NWZlNjg2YmM0MzdjMWUzYzZjZDU2MmI3ZWI4Y2JhZGRiYmQ1OTBlNThkM2RmNnccU+g=: 00:17:36.986 11:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.986 11:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:36.986 11:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.986 11:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.245 11:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.245 11:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.245 11:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.245 11:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:37.245 11:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:37.245 11:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:17:37.245 11:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.245 11:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:37.245 11:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:37.245 11:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:37.245 11:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.245 11:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.245 11:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.245 11:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.505 11:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.505 11:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.505 11:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.764 00:17:37.764 11:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.764 11:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.764 11:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.023 11:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.023 11:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.023 11:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.023 11:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.023 11:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.023 11:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.023 { 00:17:38.023 "cntlid": 25, 00:17:38.023 "qid": 0, 00:17:38.023 "state": "enabled", 00:17:38.023 "thread": "nvmf_tgt_poll_group_000", 00:17:38.023 "listen_address": { 00:17:38.023 "trtype": "TCP", 00:17:38.023 "adrfam": "IPv4", 00:17:38.023 "traddr": "10.0.0.2", 00:17:38.023 "trsvcid": "4420" 00:17:38.023 }, 00:17:38.023 "peer_address": { 00:17:38.023 "trtype": "TCP", 00:17:38.023 "adrfam": "IPv4", 00:17:38.023 "traddr": "10.0.0.1", 00:17:38.023 "trsvcid": "37416" 00:17:38.023 }, 00:17:38.023 "auth": { 00:17:38.023 "state": "completed", 00:17:38.023 "digest": "sha256", 00:17:38.023 "dhgroup": "ffdhe4096" 00:17:38.023 } 00:17:38.023 } 00:17:38.023 ]' 00:17:38.023 11:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.023 11:33:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:38.023 11:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.023 11:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:38.023 11:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.282 11:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.282 11:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.282 11:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.541 11:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzY3YTcwZDhkM2ZhMzIyNTA0NWI3YjcwZWM3OTIyZTQ5N2FkNWMwNTY1ODk5YTlid4de6A==: --dhchap-ctrl-secret DHHC-1:03:NDQ1NTM5NjdiYjM2ZGZlZjgyMWU3ZjE1YTc3N2M2NGQ2MzhjMTBlZWM0NmM3YTUzOGI3OTkzMGJiMTE0YzVhY5XLBos=: 00:17:39.108 11:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.108 11:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:39.108 11:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.108 11:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.108 11:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.108 11:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.108 11:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:39.108 11:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:39.367 11:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:17:39.367 11:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.367 11:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:39.367 11:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:39.367 11:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:39.367 11:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.367 11:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.367 11:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.367 11:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.367 11:33:13 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.367 11:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.367 11:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.626 00:17:39.626 11:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.626 11:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.626 11:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.886 11:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.886 11:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.886 11:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.886 11:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.886 11:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.886 11:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.886 { 00:17:39.886 "cntlid": 27, 00:17:39.886 "qid": 0, 00:17:39.886 "state": "enabled", 00:17:39.886 "thread": "nvmf_tgt_poll_group_000", 00:17:39.886 "listen_address": { 00:17:39.886 "trtype": "TCP", 00:17:39.886 "adrfam": "IPv4", 00:17:39.886 "traddr": "10.0.0.2", 00:17:39.886 "trsvcid": "4420" 00:17:39.886 }, 00:17:39.886 "peer_address": { 00:17:39.886 "trtype": "TCP", 00:17:39.886 "adrfam": "IPv4", 00:17:39.886 "traddr": "10.0.0.1", 00:17:39.886 "trsvcid": "37432" 00:17:39.886 }, 00:17:39.886 "auth": { 00:17:39.886 "state": "completed", 00:17:39.886 "digest": "sha256", 00:17:39.886 "dhgroup": "ffdhe4096" 00:17:39.886 } 00:17:39.886 } 00:17:39.886 ]' 00:17:39.886 11:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.886 11:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:39.886 11:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.886 11:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:39.886 11:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.886 11:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.886 11:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.886 11:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.145 11:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NjQ5ZjlmNDVlOThmMTExNzVkNWU2YjA5NjEzZTNiN2NUVF1z: --dhchap-ctrl-secret DHHC-1:02:MjA5ODkzZjMzNDE2OWY3YWQ0MWU4MTk3M2NmN2UwNGUzN2UwMjdiZjY5YzIxN2NjjBu1gg==: 00:17:41.082 11:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.367 11:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:41.367 11:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.367 11:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.367 11:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.367 11:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.367 11:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:41.367 11:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:41.367 11:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:17:41.367 11:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.367 11:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:41.367 11:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:41.368 11:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:41.368 11:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.368 11:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.368 11:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.368 11:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.368 11:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.368 11:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.368 11:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.626 00:17:41.884 11:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.884 11:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.884 11:33:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.142 11:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.142 11:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.142 11:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.142 11:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.142 11:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.142 11:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.142 { 00:17:42.142 "cntlid": 29, 00:17:42.142 "qid": 0, 00:17:42.142 "state": "enabled", 00:17:42.142 "thread": "nvmf_tgt_poll_group_000", 00:17:42.142 "listen_address": { 00:17:42.142 "trtype": "TCP", 00:17:42.142 "adrfam": "IPv4", 00:17:42.142 "traddr": "10.0.0.2", 00:17:42.142 "trsvcid": "4420" 00:17:42.142 }, 00:17:42.142 "peer_address": { 00:17:42.142 "trtype": "TCP", 00:17:42.142 "adrfam": "IPv4", 00:17:42.142 "traddr": "10.0.0.1", 00:17:42.142 "trsvcid": "37466" 00:17:42.142 }, 00:17:42.142 "auth": { 00:17:42.142 "state": "completed", 00:17:42.142 "digest": "sha256", 00:17:42.142 "dhgroup": "ffdhe4096" 00:17:42.142 } 00:17:42.142 } 00:17:42.142 ]' 00:17:42.142 11:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.142 11:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:42.142 11:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.142 11:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:42.142 11:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.142 11:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.142 11:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.142 11:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.401 11:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTRmZWM5NDQ2MmMzZTBkNjI5Nzk5YTkxYThiNmM5OWM3MDQ0NzQ0Yjg5Yjc5NWE4IA+76Q==: --dhchap-ctrl-secret DHHC-1:01:ODhhZGI0NzA0MjEyMzM3MjdlNWVlYzhkNzI4MDU0MDKlMAdD: 00:17:43.775 11:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.775 11:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:43.775 11:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.775 11:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.775 11:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.775 11:33:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.775 11:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:43.775 11:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:43.776 11:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:17:43.776 11:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.776 11:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:43.776 11:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:43.776 11:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:43.776 11:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.776 11:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:43.776 11:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.776 11:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.033 11:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.033 11:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:44.033 11:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:44.291 00:17:44.291 11:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.291 11:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.291 11:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.550 11:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.550 11:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.550 11:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.550 11:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.550 11:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.550 11:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.550 { 00:17:44.550 "cntlid": 31, 00:17:44.550 "qid": 0, 00:17:44.550 "state": "enabled", 00:17:44.550 "thread": "nvmf_tgt_poll_group_000", 00:17:44.550 "listen_address": { 00:17:44.550 "trtype": "TCP", 00:17:44.550 "adrfam": "IPv4", 00:17:44.550 "traddr": "10.0.0.2", 00:17:44.550 "trsvcid": "4420" 00:17:44.550 }, 
00:17:44.550 "peer_address": { 00:17:44.550 "trtype": "TCP", 00:17:44.550 "adrfam": "IPv4", 00:17:44.550 "traddr": "10.0.0.1", 00:17:44.550 "trsvcid": "37488" 00:17:44.550 }, 00:17:44.550 "auth": { 00:17:44.550 "state": "completed", 00:17:44.550 "digest": "sha256", 00:17:44.550 "dhgroup": "ffdhe4096" 00:17:44.550 } 00:17:44.550 } 00:17:44.550 ]' 00:17:44.550 11:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.550 11:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:44.550 11:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.550 11:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:44.550 11:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.550 11:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.550 11:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.550 11:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.809 11:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTFmOWEzNGYzNGExZDM3OWQ2NWZlNjg2YmM0MzdjMWUzYzZjZDU2MmI3ZWI4Y2JhZGRiYmQ1OTBlNThkM2RmNnccU+g=: 00:17:45.745 11:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.745 11:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:45.745 11:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.745 11:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.745 11:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.745 11:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.745 11:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.745 11:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:45.745 11:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:46.004 11:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:17:46.004 11:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.004 11:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:46.004 11:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:46.004 11:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:46.004 11:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:17:46.004 11:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.004 11:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.004 11:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.004 11:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.004 11:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.004 11:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.572 00:17:46.572 11:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.572 11:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.572 11:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.830 11:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.830 11:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.830 11:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.830 11:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.830 11:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.830 11:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.830 { 00:17:46.830 "cntlid": 33, 00:17:46.830 "qid": 0, 00:17:46.830 "state": "enabled", 00:17:46.830 "thread": "nvmf_tgt_poll_group_000", 00:17:46.830 "listen_address": { 00:17:46.830 "trtype": "TCP", 00:17:46.830 "adrfam": "IPv4", 00:17:46.830 "traddr": "10.0.0.2", 00:17:46.830 "trsvcid": "4420" 00:17:46.830 }, 00:17:46.830 "peer_address": { 00:17:46.830 "trtype": "TCP", 00:17:46.830 "adrfam": "IPv4", 00:17:46.830 "traddr": "10.0.0.1", 00:17:46.830 "trsvcid": "37516" 00:17:46.830 }, 00:17:46.830 "auth": { 00:17:46.830 "state": "completed", 00:17:46.830 "digest": "sha256", 00:17:46.830 "dhgroup": "ffdhe6144" 00:17:46.830 } 00:17:46.830 } 00:17:46.830 ]' 00:17:46.830 11:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.830 11:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.830 11:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.830 11:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:46.830 11:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.830 11:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.830 11:33:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.830 11:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.089 11:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzY3YTcwZDhkM2ZhMzIyNTA0NWI3YjcwZWM3OTIyZTQ5N2FkNWMwNTY1ODk5YTlid4de6A==: --dhchap-ctrl-secret DHHC-1:03:NDQ1NTM5NjdiYjM2ZGZlZjgyMWU3ZjE1YTc3N2M2NGQ2MzhjMTBlZWM0NmM3YTUzOGI3OTkzMGJiMTE0YzVhY5XLBos=: 00:17:48.026 11:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.026 11:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:48.026 11:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.026 11:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.026 11:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.026 11:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.026 11:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:48.026 11:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:48.287 11:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:48.287 11:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.287 11:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:48.287 11:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:48.287 11:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:48.287 11:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.287 11:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.287 11:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.287 11:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.287 11:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.287 11:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.287 11:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.930 00:17:48.930 11:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.930 11:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.930 11:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.930 11:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.930 11:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.930 11:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.930 11:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.930 11:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.930 11:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.930 { 00:17:48.930 "cntlid": 35, 00:17:48.930 "qid": 0, 00:17:48.930 "state": "enabled", 00:17:48.930 "thread": "nvmf_tgt_poll_group_000", 00:17:48.930 "listen_address": { 00:17:48.930 "trtype": "TCP", 00:17:48.930 "adrfam": "IPv4", 00:17:48.930 "traddr": "10.0.0.2", 00:17:48.930 "trsvcid": "4420" 00:17:48.930 }, 00:17:48.930 "peer_address": { 00:17:48.930 "trtype": "TCP", 00:17:48.930 "adrfam": "IPv4", 00:17:48.930 "traddr": "10.0.0.1", 00:17:48.930 "trsvcid": "50768" 00:17:48.930 }, 00:17:48.930 "auth": { 00:17:48.930 "state": "completed", 00:17:48.930 "digest": "sha256", 00:17:48.930 "dhgroup": "ffdhe6144" 00:17:48.930 } 00:17:48.930 } 00:17:48.930 ]' 00:17:48.930 11:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.192 11:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.192 11:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.192 11:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:49.192 11:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.192 11:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.192 11:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.192 11:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.451 11:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NjQ5ZjlmNDVlOThmMTExNzVkNWU2YjA5NjEzZTNiN2NUVF1z: --dhchap-ctrl-secret DHHC-1:02:MjA5ODkzZjMzNDE2OWY3YWQ0MWU4MTk3M2NmN2UwNGUzN2UwMjdiZjY5YzIxN2NjjBu1gg==: 00:17:50.388 11:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.388 11:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:50.388 11:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.388 11:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.388 11:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.388 11:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.388 11:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:50.388 11:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:50.388 11:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:17:50.388 11:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.388 11:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:50.388 11:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:50.388 11:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:50.388 11:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.388 11:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.388 11:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.388 11:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.388 11:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.388 11:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.388 11:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.955 00:17:50.955 11:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.955 11:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.955 11:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.214 11:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.214 11:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.214 11:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.214 11:33:25 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:51.214 11:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.214 11:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.214 { 00:17:51.214 "cntlid": 37, 00:17:51.214 "qid": 0, 00:17:51.214 "state": "enabled", 00:17:51.214 "thread": "nvmf_tgt_poll_group_000", 00:17:51.214 "listen_address": { 00:17:51.214 "trtype": "TCP", 00:17:51.214 "adrfam": "IPv4", 00:17:51.214 "traddr": "10.0.0.2", 00:17:51.214 "trsvcid": "4420" 00:17:51.214 }, 00:17:51.214 "peer_address": { 00:17:51.214 "trtype": "TCP", 00:17:51.214 "adrfam": "IPv4", 00:17:51.214 "traddr": "10.0.0.1", 00:17:51.214 "trsvcid": "50796" 00:17:51.214 }, 00:17:51.214 "auth": { 00:17:51.214 "state": "completed", 00:17:51.214 "digest": "sha256", 00:17:51.214 "dhgroup": "ffdhe6144" 00:17:51.214 } 00:17:51.214 } 00:17:51.214 ]' 00:17:51.214 11:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.214 11:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.214 11:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.474 11:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:51.474 11:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.474 11:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.474 11:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.474 11:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.732 11:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTRmZWM5NDQ2MmMzZTBkNjI5Nzk5YTkxYThiNmM5OWM3MDQ0NzQ0Yjg5Yjc5NWE4IA+76Q==: --dhchap-ctrl-secret DHHC-1:01:ODhhZGI0NzA0MjEyMzM3MjdlNWVlYzhkNzI4MDU0MDKlMAdD: 00:17:52.668 11:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.668 11:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:52.668 11:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.668 11:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.668 11:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.668 11:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.668 11:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:52.668 11:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:52.668 11:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:17:52.668 11:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.668 11:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:52.668 11:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:52.668 11:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:52.668 11:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.668 11:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:52.668 11:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.668 11:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.668 11:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.668 11:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:52.668 11:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.237 00:17:53.237 11:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.237 11:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.237 11:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.496 11:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.496 11:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.496 11:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.496 11:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.496 11:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.496 11:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.496 { 00:17:53.496 "cntlid": 39, 00:17:53.496 "qid": 0, 00:17:53.496 "state": "enabled", 00:17:53.496 "thread": "nvmf_tgt_poll_group_000", 00:17:53.496 "listen_address": { 00:17:53.496 "trtype": "TCP", 00:17:53.496 "adrfam": "IPv4", 00:17:53.496 "traddr": "10.0.0.2", 00:17:53.496 "trsvcid": "4420" 00:17:53.496 }, 00:17:53.496 "peer_address": { 00:17:53.496 "trtype": "TCP", 00:17:53.496 "adrfam": "IPv4", 00:17:53.496 "traddr": "10.0.0.1", 00:17:53.497 "trsvcid": "50836" 00:17:53.497 }, 00:17:53.497 "auth": { 00:17:53.497 "state": "completed", 00:17:53.497 "digest": "sha256", 00:17:53.497 "dhgroup": "ffdhe6144" 00:17:53.497 } 00:17:53.497 } 00:17:53.497 ]' 00:17:53.497 11:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.497 11:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:53.497 11:33:27 
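# Condensed for reference, one pass of the connect_authenticate flow traced above amounts to the
# sketch below. hostrpc mirrors the helper at auth.sh@31 (host-side RPC socket); rpc_cmd is the
# target-side RPC helper from autotest_common.sh; $hostnqn/$hostid stand for the
# nqn.2014-08.org.nvmexpress:uuid:00abaa28-... host NQN and its UUID used throughout this run;
# key2/ckey2 are DH-HMAC-CHAP key names prepared earlier in the test, and the DHHC-1:... strings
# given to nvme-cli are the matching secrets (shown in full in the log, elided here).
hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0    # auth.state should read "completed"
hostrpc bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"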
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.497 11:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:53.497 11:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.756 11:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.756 11:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.756 11:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.016 11:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTFmOWEzNGYzNGExZDM3OWQ2NWZlNjg2YmM0MzdjMWUzYzZjZDU2MmI3ZWI4Y2JhZGRiYmQ1OTBlNThkM2RmNnccU+g=: 00:17:54.583 11:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.844 11:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:54.844 11:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.844 11:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.844 11:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.844 11:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:54.844 11:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.844 11:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:54.844 11:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:55.103 11:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:55.103 11:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.103 11:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:55.103 11:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:55.103 11:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:55.103 11:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.103 11:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.103 11:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.103 11:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.103 11:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.103 11:33:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.103 11:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.671 00:17:55.671 11:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.671 11:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.671 11:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.930 11:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.930 11:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.930 11:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.930 11:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.930 11:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.930 11:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.930 { 00:17:55.930 "cntlid": 41, 00:17:55.930 "qid": 0, 00:17:55.930 "state": "enabled", 00:17:55.930 "thread": "nvmf_tgt_poll_group_000", 00:17:55.930 "listen_address": { 00:17:55.930 "trtype": "TCP", 00:17:55.930 "adrfam": "IPv4", 00:17:55.930 "traddr": "10.0.0.2", 00:17:55.930 "trsvcid": "4420" 00:17:55.930 }, 00:17:55.930 "peer_address": { 00:17:55.930 "trtype": "TCP", 00:17:55.930 "adrfam": "IPv4", 00:17:55.930 "traddr": "10.0.0.1", 00:17:55.930 "trsvcid": "50876" 00:17:55.930 }, 00:17:55.930 "auth": { 00:17:55.930 "state": "completed", 00:17:55.930 "digest": "sha256", 00:17:55.930 "dhgroup": "ffdhe8192" 00:17:55.930 } 00:17:55.930 } 00:17:55.930 ]' 00:17:55.931 11:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.931 11:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.931 11:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.189 11:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:56.189 11:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.189 11:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.189 11:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.189 11:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.448 11:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:MzY3YTcwZDhkM2ZhMzIyNTA0NWI3YjcwZWM3OTIyZTQ5N2FkNWMwNTY1ODk5YTlid4de6A==: --dhchap-ctrl-secret DHHC-1:03:NDQ1NTM5NjdiYjM2ZGZlZjgyMWU3ZjE1YTc3N2M2NGQ2MzhjMTBlZWM0NmM3YTUzOGI3OTkzMGJiMTE0YzVhY5XLBos=: 00:17:57.385 11:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.385 11:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:57.385 11:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.385 11:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.385 11:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.385 11:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.385 11:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:57.385 11:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:57.385 11:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:57.385 11:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.385 11:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:57.385 11:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:57.385 11:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:57.385 11:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.385 11:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.385 11:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.385 11:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.385 11:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.385 11:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.385 11:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.323 00:17:58.323 11:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.323 11:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.323 11:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.582 11:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.582 11:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.582 11:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.582 11:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.582 11:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.582 11:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.582 { 00:17:58.582 "cntlid": 43, 00:17:58.582 "qid": 0, 00:17:58.582 "state": "enabled", 00:17:58.582 "thread": "nvmf_tgt_poll_group_000", 00:17:58.582 "listen_address": { 00:17:58.582 "trtype": "TCP", 00:17:58.582 "adrfam": "IPv4", 00:17:58.582 "traddr": "10.0.0.2", 00:17:58.582 "trsvcid": "4420" 00:17:58.582 }, 00:17:58.582 "peer_address": { 00:17:58.582 "trtype": "TCP", 00:17:58.582 "adrfam": "IPv4", 00:17:58.582 "traddr": "10.0.0.1", 00:17:58.582 "trsvcid": "33688" 00:17:58.582 }, 00:17:58.582 "auth": { 00:17:58.582 "state": "completed", 00:17:58.582 "digest": "sha256", 00:17:58.582 "dhgroup": "ffdhe8192" 00:17:58.582 } 00:17:58.582 } 00:17:58.582 ]' 00:17:58.582 11:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.582 11:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.582 11:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.582 11:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:58.582 11:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.582 11:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.582 11:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.582 11:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.840 11:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NjQ5ZjlmNDVlOThmMTExNzVkNWU2YjA5NjEzZTNiN2NUVF1z: --dhchap-ctrl-secret DHHC-1:02:MjA5ODkzZjMzNDE2OWY3YWQ0MWU4MTk3M2NmN2UwNGUzN2UwMjdiZjY5YzIxN2NjjBu1gg==: 00:17:59.774 11:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.774 11:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:59.774 11:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.774 11:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.774 11:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.775 11:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:17:59.775 11:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:59.775 11:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:00.033 11:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:00.033 11:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.033 11:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:00.033 11:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:00.033 11:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:00.033 11:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.033 11:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.033 11:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.033 11:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.033 11:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.033 11:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.034 11:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.602 00:18:00.602 11:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.602 11:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.602 11:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.861 11:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.861 11:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.861 11:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.861 11:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.861 11:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.861 11:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.861 { 00:18:00.861 "cntlid": 45, 00:18:00.861 "qid": 0, 00:18:00.861 "state": "enabled", 00:18:00.861 "thread": "nvmf_tgt_poll_group_000", 00:18:00.861 "listen_address": { 00:18:00.861 "trtype": "TCP", 00:18:00.861 "adrfam": "IPv4", 00:18:00.861 "traddr": "10.0.0.2", 00:18:00.861 "trsvcid": "4420" 
00:18:00.861 }, 00:18:00.861 "peer_address": { 00:18:00.861 "trtype": "TCP", 00:18:00.861 "adrfam": "IPv4", 00:18:00.861 "traddr": "10.0.0.1", 00:18:00.861 "trsvcid": "33708" 00:18:00.861 }, 00:18:00.861 "auth": { 00:18:00.861 "state": "completed", 00:18:00.861 "digest": "sha256", 00:18:00.861 "dhgroup": "ffdhe8192" 00:18:00.861 } 00:18:00.861 } 00:18:00.861 ]' 00:18:00.861 11:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.120 11:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.120 11:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.120 11:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:01.120 11:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.120 11:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.120 11:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.120 11:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.379 11:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTRmZWM5NDQ2MmMzZTBkNjI5Nzk5YTkxYThiNmM5OWM3MDQ0NzQ0Yjg5Yjc5NWE4IA+76Q==: --dhchap-ctrl-secret DHHC-1:01:ODhhZGI0NzA0MjEyMzM3MjdlNWVlYzhkNzI4MDU0MDKlMAdD: 00:18:02.314 11:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.314 11:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:02.314 11:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.314 11:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.314 11:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.314 11:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.314 11:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:02.314 11:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:02.314 11:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:02.314 11:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.314 11:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:02.314 11:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:02.314 11:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:02.314 11:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.314 11:33:36 
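# One detail the expansion above makes visible: the controller (bidirectional) key is optional.
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) only produces the extra flag when a controller
# secret exists for that key index, which is why the key3 passes in this log carry no
# --dhchap-ctrlr-key on add_host/attach and no --dhchap-ctrl-secret on nvme connect
# (one-way authentication). A minimal illustration of the idiom, with made-up array contents:
ckeys=("ctrl-secret-0" "ctrl-secret-1" "ctrl-secret-2" "")   # index 3 intentionally empty
keyid=3
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "extra flags: ${ckey[*]:-none}"                         # prints "extra flags: none" for keyid=3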
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:02.314 11:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.314 11:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.314 11:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.314 11:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:02.314 11:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:03.250 00:18:03.250 11:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.250 11:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.250 11:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.509 11:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.509 11:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.509 11:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.509 11:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.509 11:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.509 11:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.509 { 00:18:03.509 "cntlid": 47, 00:18:03.509 "qid": 0, 00:18:03.509 "state": "enabled", 00:18:03.509 "thread": "nvmf_tgt_poll_group_000", 00:18:03.509 "listen_address": { 00:18:03.509 "trtype": "TCP", 00:18:03.509 "adrfam": "IPv4", 00:18:03.509 "traddr": "10.0.0.2", 00:18:03.509 "trsvcid": "4420" 00:18:03.509 }, 00:18:03.509 "peer_address": { 00:18:03.509 "trtype": "TCP", 00:18:03.509 "adrfam": "IPv4", 00:18:03.509 "traddr": "10.0.0.1", 00:18:03.509 "trsvcid": "33722" 00:18:03.509 }, 00:18:03.509 "auth": { 00:18:03.509 "state": "completed", 00:18:03.509 "digest": "sha256", 00:18:03.509 "dhgroup": "ffdhe8192" 00:18:03.509 } 00:18:03.509 } 00:18:03.509 ]' 00:18:03.509 11:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.509 11:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.509 11:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.509 11:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:03.509 11:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.509 11:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.509 11:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.509 
11:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.767 11:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTFmOWEzNGYzNGExZDM3OWQ2NWZlNjg2YmM0MzdjMWUzYzZjZDU2MmI3ZWI4Y2JhZGRiYmQ1OTBlNThkM2RmNnccU+g=: 00:18:04.702 11:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.702 11:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:04.702 11:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.702 11:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.702 11:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.702 11:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:04.702 11:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:04.702 11:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.702 11:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:04.702 11:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:04.961 11:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:04.961 11:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.961 11:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:04.961 11:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:04.961 11:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:04.961 11:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.961 11:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.961 11:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.961 11:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.961 11:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.961 11:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.961 11:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.220 00:18:05.220 11:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.220 11:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.220 11:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.478 11:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.478 11:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.478 11:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.478 11:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.478 11:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.478 11:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.478 { 00:18:05.478 "cntlid": 49, 00:18:05.478 "qid": 0, 00:18:05.478 "state": "enabled", 00:18:05.478 "thread": "nvmf_tgt_poll_group_000", 00:18:05.478 "listen_address": { 00:18:05.478 "trtype": "TCP", 00:18:05.478 "adrfam": "IPv4", 00:18:05.478 "traddr": "10.0.0.2", 00:18:05.478 "trsvcid": "4420" 00:18:05.478 }, 00:18:05.478 "peer_address": { 00:18:05.478 "trtype": "TCP", 00:18:05.478 "adrfam": "IPv4", 00:18:05.478 "traddr": "10.0.0.1", 00:18:05.478 "trsvcid": "33732" 00:18:05.478 }, 00:18:05.478 "auth": { 00:18:05.478 "state": "completed", 00:18:05.478 "digest": "sha384", 00:18:05.478 "dhgroup": "null" 00:18:05.478 } 00:18:05.478 } 00:18:05.478 ]' 00:18:05.478 11:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.478 11:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:05.478 11:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.478 11:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:05.478 11:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.478 11:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.478 11:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.478 11:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.736 11:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzY3YTcwZDhkM2ZhMzIyNTA0NWI3YjcwZWM3OTIyZTQ5N2FkNWMwNTY1ODk5YTlid4de6A==: --dhchap-ctrl-secret DHHC-1:03:NDQ1NTM5NjdiYjM2ZGZlZjgyMWU3ZjE1YTc3N2M2NGQ2MzhjMTBlZWM0NmM3YTUzOGI3OTkzMGJiMTE0YzVhY5XLBos=: 00:18:07.114 11:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.114 11:33:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:07.114 11:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.114 11:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.114 11:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.114 11:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.114 11:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:07.114 11:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:07.114 11:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:07.114 11:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.114 11:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:07.114 11:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:07.114 11:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:07.114 11:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.114 11:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.114 11:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.114 11:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.114 11:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.114 11:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.114 11:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.373 00:18:07.373 11:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.373 11:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.373 11:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.632 11:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.632 11:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.632 11:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.632 11:33:42 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:07.632 11:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.632 11:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.632 { 00:18:07.632 "cntlid": 51, 00:18:07.632 "qid": 0, 00:18:07.632 "state": "enabled", 00:18:07.632 "thread": "nvmf_tgt_poll_group_000", 00:18:07.632 "listen_address": { 00:18:07.632 "trtype": "TCP", 00:18:07.632 "adrfam": "IPv4", 00:18:07.632 "traddr": "10.0.0.2", 00:18:07.632 "trsvcid": "4420" 00:18:07.632 }, 00:18:07.632 "peer_address": { 00:18:07.632 "trtype": "TCP", 00:18:07.632 "adrfam": "IPv4", 00:18:07.632 "traddr": "10.0.0.1", 00:18:07.632 "trsvcid": "33750" 00:18:07.632 }, 00:18:07.632 "auth": { 00:18:07.632 "state": "completed", 00:18:07.632 "digest": "sha384", 00:18:07.632 "dhgroup": "null" 00:18:07.632 } 00:18:07.632 } 00:18:07.632 ]' 00:18:07.632 11:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.891 11:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:07.891 11:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.891 11:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:07.891 11:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.891 11:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.891 11:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.891 11:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.150 11:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NjQ5ZjlmNDVlOThmMTExNzVkNWU2YjA5NjEzZTNiN2NUVF1z: --dhchap-ctrl-secret DHHC-1:02:MjA5ODkzZjMzNDE2OWY3YWQ0MWU4MTk3M2NmN2UwNGUzN2UwMjdiZjY5YzIxN2NjjBu1gg==: 00:18:09.086 11:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.086 11:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:09.086 11:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.086 11:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.086 11:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.086 11:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.086 11:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:09.086 11:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:09.344 11:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:09.344 11:33:43 
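# The @91/@92/@93 frames above are the three nested loops that drive this whole section: every
# digest is exercised against every DH group and every key index. An outline of that driver,
# reconstructed from the frames traced in this log (the arrays hold at least the values seen
# here: sha256/sha384 for digests; null/ffdhe2048/ffdhe6144/ffdhe8192 for dhgroups; keys 0-3;
# hostrpc/connect_authenticate as defined in auth.sh):
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done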
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.344 11:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:09.344 11:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:09.344 11:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:09.344 11:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.344 11:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.344 11:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.345 11:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.345 11:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.345 11:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.345 11:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.604 00:18:09.604 11:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.604 11:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.604 11:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.863 11:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.863 11:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.863 11:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.863 11:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.863 11:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.863 11:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.863 { 00:18:09.863 "cntlid": 53, 00:18:09.863 "qid": 0, 00:18:09.863 "state": "enabled", 00:18:09.863 "thread": "nvmf_tgt_poll_group_000", 00:18:09.863 "listen_address": { 00:18:09.863 "trtype": "TCP", 00:18:09.863 "adrfam": "IPv4", 00:18:09.863 "traddr": "10.0.0.2", 00:18:09.863 "trsvcid": "4420" 00:18:09.863 }, 00:18:09.863 "peer_address": { 00:18:09.863 "trtype": "TCP", 00:18:09.863 "adrfam": "IPv4", 00:18:09.863 "traddr": "10.0.0.1", 00:18:09.863 "trsvcid": "60066" 00:18:09.863 }, 00:18:09.863 "auth": { 00:18:09.863 "state": "completed", 00:18:09.863 "digest": "sha384", 00:18:09.863 "dhgroup": "null" 00:18:09.863 } 00:18:09.863 } 00:18:09.863 ]' 00:18:09.863 11:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.122 11:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:18:10.122 11:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.122 11:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:10.122 11:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.122 11:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.122 11:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.122 11:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.382 11:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTRmZWM5NDQ2MmMzZTBkNjI5Nzk5YTkxYThiNmM5OWM3MDQ0NzQ0Yjg5Yjc5NWE4IA+76Q==: --dhchap-ctrl-secret DHHC-1:01:ODhhZGI0NzA0MjEyMzM3MjdlNWVlYzhkNzI4MDU0MDKlMAdD: 00:18:11.414 11:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.414 11:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:11.414 11:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.414 11:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.414 11:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.414 11:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.414 11:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:11.414 11:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:11.414 11:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:11.414 11:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.414 11:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:11.414 11:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:11.414 11:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:11.414 11:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.414 11:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:11.414 11:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.414 11:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.414 11:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.414 11:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.414 11:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.705 00:18:11.705 11:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.705 11:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.705 11:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.007 11:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.007 11:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.007 11:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.007 11:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.007 11:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.007 11:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.007 { 00:18:12.007 "cntlid": 55, 00:18:12.007 "qid": 0, 00:18:12.007 "state": "enabled", 00:18:12.007 "thread": "nvmf_tgt_poll_group_000", 00:18:12.007 "listen_address": { 00:18:12.007 "trtype": "TCP", 00:18:12.007 "adrfam": "IPv4", 00:18:12.007 "traddr": "10.0.0.2", 00:18:12.007 "trsvcid": "4420" 00:18:12.007 }, 00:18:12.007 "peer_address": { 00:18:12.007 "trtype": "TCP", 00:18:12.007 "adrfam": "IPv4", 00:18:12.007 "traddr": "10.0.0.1", 00:18:12.007 "trsvcid": "60076" 00:18:12.007 }, 00:18:12.007 "auth": { 00:18:12.007 "state": "completed", 00:18:12.007 "digest": "sha384", 00:18:12.007 "dhgroup": "null" 00:18:12.007 } 00:18:12.007 } 00:18:12.007 ]' 00:18:12.007 11:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.007 11:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.007 11:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.007 11:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:12.007 11:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.007 11:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.007 11:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.007 11:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.266 11:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTFmOWEzNGYzNGExZDM3OWQ2NWZlNjg2YmM0MzdjMWUzYzZjZDU2MmI3ZWI4Y2JhZGRiYmQ1OTBlNThkM2RmNnccU+g=: 00:18:13.202 11:33:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.202 11:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:13.202 11:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.202 11:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.202 11:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.202 11:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:13.202 11:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.202 11:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:13.461 11:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:13.461 11:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:13.461 11:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.462 11:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:13.462 11:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:13.462 11:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:13.462 11:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.462 11:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.462 11:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.462 11:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.720 11:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.720 11:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.720 11:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.979 00:18:13.979 11:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.979 11:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.979 11:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.236 11:33:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.236 11:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.236 11:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.236 11:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.236 11:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.236 11:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.236 { 00:18:14.236 "cntlid": 57, 00:18:14.236 "qid": 0, 00:18:14.236 "state": "enabled", 00:18:14.236 "thread": "nvmf_tgt_poll_group_000", 00:18:14.236 "listen_address": { 00:18:14.236 "trtype": "TCP", 00:18:14.236 "adrfam": "IPv4", 00:18:14.236 "traddr": "10.0.0.2", 00:18:14.236 "trsvcid": "4420" 00:18:14.236 }, 00:18:14.236 "peer_address": { 00:18:14.236 "trtype": "TCP", 00:18:14.236 "adrfam": "IPv4", 00:18:14.236 "traddr": "10.0.0.1", 00:18:14.236 "trsvcid": "60108" 00:18:14.236 }, 00:18:14.236 "auth": { 00:18:14.236 "state": "completed", 00:18:14.236 "digest": "sha384", 00:18:14.236 "dhgroup": "ffdhe2048" 00:18:14.236 } 00:18:14.236 } 00:18:14.236 ]' 00:18:14.236 11:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.236 11:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.236 11:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.236 11:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:14.236 11:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.236 11:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.236 11:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.236 11:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.494 11:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzY3YTcwZDhkM2ZhMzIyNTA0NWI3YjcwZWM3OTIyZTQ5N2FkNWMwNTY1ODk5YTlid4de6A==: --dhchap-ctrl-secret DHHC-1:03:NDQ1NTM5NjdiYjM2ZGZlZjgyMWU3ZjE1YTc3N2M2NGQ2MzhjMTBlZWM0NmM3YTUzOGI3OTkzMGJiMTE0YzVhY5XLBos=: 00:18:15.431 11:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.431 11:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:15.431 11:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.431 11:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.431 11:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.432 11:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.432 11:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:15.432 11:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:15.691 11:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:15.691 11:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.691 11:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:15.691 11:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:15.691 11:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:15.691 11:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.691 11:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.691 11:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.691 11:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.691 11:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.691 11:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.691 11:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.950 00:18:15.950 11:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.950 11:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.950 11:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.209 11:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.209 11:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.209 11:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.209 11:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.209 11:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.209 11:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.209 { 00:18:16.209 "cntlid": 59, 00:18:16.209 "qid": 0, 00:18:16.209 "state": "enabled", 00:18:16.209 "thread": "nvmf_tgt_poll_group_000", 00:18:16.209 "listen_address": { 00:18:16.209 "trtype": "TCP", 00:18:16.209 "adrfam": "IPv4", 00:18:16.209 "traddr": "10.0.0.2", 00:18:16.209 "trsvcid": "4420" 00:18:16.209 }, 00:18:16.209 "peer_address": { 00:18:16.209 "trtype": "TCP", 00:18:16.209 "adrfam": "IPv4", 00:18:16.209 
"traddr": "10.0.0.1", 00:18:16.209 "trsvcid": "60120" 00:18:16.209 }, 00:18:16.209 "auth": { 00:18:16.209 "state": "completed", 00:18:16.209 "digest": "sha384", 00:18:16.209 "dhgroup": "ffdhe2048" 00:18:16.209 } 00:18:16.209 } 00:18:16.209 ]' 00:18:16.209 11:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.209 11:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.209 11:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.209 11:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:16.209 11:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.209 11:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.209 11:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.209 11:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.469 11:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NjQ5ZjlmNDVlOThmMTExNzVkNWU2YjA5NjEzZTNiN2NUVF1z: --dhchap-ctrl-secret DHHC-1:02:MjA5ODkzZjMzNDE2OWY3YWQ0MWU4MTk3M2NmN2UwNGUzN2UwMjdiZjY5YzIxN2NjjBu1gg==: 00:18:17.406 11:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.406 11:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:17.406 11:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.406 11:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.406 11:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.406 11:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.406 11:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:17.406 11:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:17.406 11:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:17.406 11:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.406 11:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:17.406 11:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:17.406 11:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:17.406 11:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.406 11:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.406 11:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.406 11:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.406 11:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.406 11:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.406 11:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.665 00:18:17.665 11:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.665 11:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.665 11:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.923 11:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.923 11:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.923 11:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.923 11:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.923 11:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.923 11:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.923 { 00:18:17.923 "cntlid": 61, 00:18:17.923 "qid": 0, 00:18:17.923 "state": "enabled", 00:18:17.923 "thread": "nvmf_tgt_poll_group_000", 00:18:17.923 "listen_address": { 00:18:17.923 "trtype": "TCP", 00:18:17.923 "adrfam": "IPv4", 00:18:17.923 "traddr": "10.0.0.2", 00:18:17.923 "trsvcid": "4420" 00:18:17.923 }, 00:18:17.923 "peer_address": { 00:18:17.923 "trtype": "TCP", 00:18:17.923 "adrfam": "IPv4", 00:18:17.923 "traddr": "10.0.0.1", 00:18:17.923 "trsvcid": "52624" 00:18:17.923 }, 00:18:17.923 "auth": { 00:18:17.923 "state": "completed", 00:18:17.923 "digest": "sha384", 00:18:17.923 "dhgroup": "ffdhe2048" 00:18:17.923 } 00:18:17.923 } 00:18:17.923 ]' 00:18:17.923 11:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.182 11:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.182 11:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.182 11:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:18.182 11:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.182 11:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.182 11:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.182 11:33:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.441 11:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTRmZWM5NDQ2MmMzZTBkNjI5Nzk5YTkxYThiNmM5OWM3MDQ0NzQ0Yjg5Yjc5NWE4IA+76Q==: --dhchap-ctrl-secret DHHC-1:01:ODhhZGI0NzA0MjEyMzM3MjdlNWVlYzhkNzI4MDU0MDKlMAdD: 00:18:19.378 11:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.378 11:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:19.378 11:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.378 11:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.378 11:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.378 11:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.378 11:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:19.378 11:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:19.378 11:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:19.378 11:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.378 11:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:19.378 11:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:19.378 11:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:19.378 11:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.378 11:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:19.378 11:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.378 11:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.378 11:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.378 11:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.378 11:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.947 00:18:19.947 11:33:54 
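Note in the key3 iteration above that no controller key is supplied: ckeys[3] is empty, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion collapses and both nvmf_subsystem_add_host and the attach carry only --dhchap-key key3. Reduced to plain commands, the host-side half of that iteration looks roughly like this (addresses and NQNs copied from this run; "key3" is the name of a key registered earlier in the test, outside this excerpt):

  # Restrict the initiator to the digest/DH group under test, then attach the
  # controller using only the named host key (no bidirectional controller key).
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 \
      -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key3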
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.947 11:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.947 11:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.947 11:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.947 11:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.947 11:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.947 11:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.947 11:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.947 11:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.947 { 00:18:19.947 "cntlid": 63, 00:18:19.947 "qid": 0, 00:18:19.947 "state": "enabled", 00:18:19.947 "thread": "nvmf_tgt_poll_group_000", 00:18:19.947 "listen_address": { 00:18:19.947 "trtype": "TCP", 00:18:19.947 "adrfam": "IPv4", 00:18:19.947 "traddr": "10.0.0.2", 00:18:19.947 "trsvcid": "4420" 00:18:19.947 }, 00:18:19.947 "peer_address": { 00:18:19.947 "trtype": "TCP", 00:18:19.947 "adrfam": "IPv4", 00:18:19.947 "traddr": "10.0.0.1", 00:18:19.947 "trsvcid": "52654" 00:18:19.947 }, 00:18:19.947 "auth": { 00:18:19.947 "state": "completed", 00:18:19.947 "digest": "sha384", 00:18:19.947 "dhgroup": "ffdhe2048" 00:18:19.947 } 00:18:19.947 } 00:18:19.947 ]' 00:18:19.947 11:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.947 11:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:19.947 11:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.206 11:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:20.206 11:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.206 11:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.206 11:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.206 11:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.465 11:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTFmOWEzNGYzNGExZDM3OWQ2NWZlNjg2YmM0MzdjMWUzYzZjZDU2MmI3ZWI4Y2JhZGRiYmQ1OTBlNThkM2RmNnccU+g=: 00:18:21.402 11:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.402 11:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:21.402 11:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.402 11:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
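That closes out the ffdhe2048 pass. Each round also has a target-side and kernel-initiator half, visible in the entries above; in outline it is roughly the following (all values lifted from this log, with the full DHHC-1 strings shown in the log substituted for $secret and $ctrl_secret):

  # Target: authorize the host NQN on the subsystem and bind its DH-HMAC-CHAP keys.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Kernel initiator: connect with the matching configured secrets, then tear down.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 \
      --hostid 00abaa28-3537-eb11-906e-0017a4403562 \
      --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562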
00:18:21.402 11:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.402 11:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:21.402 11:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.402 11:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:21.402 11:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:21.402 11:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:21.402 11:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.402 11:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:21.402 11:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:21.402 11:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:21.402 11:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.402 11:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.402 11:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.402 11:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.402 11:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.402 11:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.402 11:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.970 00:18:21.970 11:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.970 11:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.970 11:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.970 11:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.228 11:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.228 11:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.228 11:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.229 11:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.229 11:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.229 { 
00:18:22.229 "cntlid": 65, 00:18:22.229 "qid": 0, 00:18:22.229 "state": "enabled", 00:18:22.229 "thread": "nvmf_tgt_poll_group_000", 00:18:22.229 "listen_address": { 00:18:22.229 "trtype": "TCP", 00:18:22.229 "adrfam": "IPv4", 00:18:22.229 "traddr": "10.0.0.2", 00:18:22.229 "trsvcid": "4420" 00:18:22.229 }, 00:18:22.229 "peer_address": { 00:18:22.229 "trtype": "TCP", 00:18:22.229 "adrfam": "IPv4", 00:18:22.229 "traddr": "10.0.0.1", 00:18:22.229 "trsvcid": "52676" 00:18:22.229 }, 00:18:22.229 "auth": { 00:18:22.229 "state": "completed", 00:18:22.229 "digest": "sha384", 00:18:22.229 "dhgroup": "ffdhe3072" 00:18:22.229 } 00:18:22.229 } 00:18:22.229 ]' 00:18:22.229 11:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.229 11:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:22.229 11:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.229 11:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:22.229 11:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.229 11:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.229 11:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.229 11:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.486 11:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzY3YTcwZDhkM2ZhMzIyNTA0NWI3YjcwZWM3OTIyZTQ5N2FkNWMwNTY1ODk5YTlid4de6A==: --dhchap-ctrl-secret DHHC-1:03:NDQ1NTM5NjdiYjM2ZGZlZjgyMWU3ZjE1YTc3N2M2NGQ2MzhjMTBlZWM0NmM3YTUzOGI3OTkzMGJiMTE0YzVhY5XLBos=: 00:18:23.420 11:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.420 11:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:23.420 11:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.420 11:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.420 11:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.420 11:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.421 11:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:23.421 11:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:23.678 11:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:23.678 11:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.678 11:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:18:23.678 11:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:23.678 11:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:23.678 11:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.678 11:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.679 11:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.679 11:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.679 11:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.679 11:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.679 11:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.937 00:18:23.937 11:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.937 11:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.937 11:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.195 11:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.195 11:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.195 11:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.195 11:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.195 11:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.195 11:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.195 { 00:18:24.195 "cntlid": 67, 00:18:24.195 "qid": 0, 00:18:24.195 "state": "enabled", 00:18:24.195 "thread": "nvmf_tgt_poll_group_000", 00:18:24.195 "listen_address": { 00:18:24.195 "trtype": "TCP", 00:18:24.195 "adrfam": "IPv4", 00:18:24.195 "traddr": "10.0.0.2", 00:18:24.195 "trsvcid": "4420" 00:18:24.195 }, 00:18:24.195 "peer_address": { 00:18:24.195 "trtype": "TCP", 00:18:24.195 "adrfam": "IPv4", 00:18:24.195 "traddr": "10.0.0.1", 00:18:24.195 "trsvcid": "52708" 00:18:24.195 }, 00:18:24.195 "auth": { 00:18:24.195 "state": "completed", 00:18:24.195 "digest": "sha384", 00:18:24.195 "dhgroup": "ffdhe3072" 00:18:24.195 } 00:18:24.195 } 00:18:24.195 ]' 00:18:24.195 11:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.195 11:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:24.195 11:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.195 11:33:58 
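The same sequence repeats for every (dhgroup, keyid) combination. Pieced together from the target/auth.sh@92-@96 markers scattered through this log, the driving loop is approximately the following (array names come from the markers; their contents are inferred from the groups and keys actually exercised here):

  # Outer structure inferred from the log: per DH group, per key, reconfigure the
  # host and run one connect_authenticate round. This run shows sha384 with
  # dhgroups null, ffdhe2048, ffdhe3072 and ffdhe4096 so far.
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha384 "$dhgroup" "$keyid"
      done
  done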
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:24.195 11:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.195 11:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.195 11:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.195 11:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.453 11:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NjQ5ZjlmNDVlOThmMTExNzVkNWU2YjA5NjEzZTNiN2NUVF1z: --dhchap-ctrl-secret DHHC-1:02:MjA5ODkzZjMzNDE2OWY3YWQ0MWU4MTk3M2NmN2UwNGUzN2UwMjdiZjY5YzIxN2NjjBu1gg==: 00:18:25.388 11:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.388 11:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:25.388 11:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.388 11:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.388 11:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.388 11:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.388 11:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:25.388 11:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:25.647 11:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:25.647 11:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.647 11:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:25.647 11:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:25.647 11:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:25.647 11:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.647 11:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.647 11:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.647 11:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.647 11:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.647 11:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.647 11:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.906 00:18:25.906 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.906 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.906 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.165 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.165 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.165 11:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.165 11:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.165 11:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.165 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.165 { 00:18:26.165 "cntlid": 69, 00:18:26.165 "qid": 0, 00:18:26.165 "state": "enabled", 00:18:26.165 "thread": "nvmf_tgt_poll_group_000", 00:18:26.165 "listen_address": { 00:18:26.165 "trtype": "TCP", 00:18:26.165 "adrfam": "IPv4", 00:18:26.165 "traddr": "10.0.0.2", 00:18:26.165 "trsvcid": "4420" 00:18:26.165 }, 00:18:26.165 "peer_address": { 00:18:26.165 "trtype": "TCP", 00:18:26.165 "adrfam": "IPv4", 00:18:26.165 "traddr": "10.0.0.1", 00:18:26.165 "trsvcid": "52754" 00:18:26.165 }, 00:18:26.165 "auth": { 00:18:26.165 "state": "completed", 00:18:26.165 "digest": "sha384", 00:18:26.165 "dhgroup": "ffdhe3072" 00:18:26.165 } 00:18:26.165 } 00:18:26.165 ]' 00:18:26.165 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.165 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:26.165 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.424 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:26.424 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.424 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.424 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.424 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.683 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTRmZWM5NDQ2MmMzZTBkNjI5Nzk5YTkxYThiNmM5OWM3MDQ0NzQ0Yjg5Yjc5NWE4IA+76Q==: --dhchap-ctrl-secret 
DHHC-1:01:ODhhZGI0NzA0MjEyMzM3MjdlNWVlYzhkNzI4MDU0MDKlMAdD: 00:18:27.620 11:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.621 11:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:27.621 11:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.621 11:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.621 11:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.621 11:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.621 11:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:27.621 11:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:27.621 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:27.621 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.621 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:27.621 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:27.621 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:27.621 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.621 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:27.621 11:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.621 11:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.621 11:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.621 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.621 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:28.196 00:18:28.196 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.196 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.196 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.196 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.196 11:34:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.196 11:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.196 11:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.196 11:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.196 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.196 { 00:18:28.196 "cntlid": 71, 00:18:28.196 "qid": 0, 00:18:28.196 "state": "enabled", 00:18:28.196 "thread": "nvmf_tgt_poll_group_000", 00:18:28.196 "listen_address": { 00:18:28.196 "trtype": "TCP", 00:18:28.196 "adrfam": "IPv4", 00:18:28.196 "traddr": "10.0.0.2", 00:18:28.196 "trsvcid": "4420" 00:18:28.196 }, 00:18:28.196 "peer_address": { 00:18:28.196 "trtype": "TCP", 00:18:28.196 "adrfam": "IPv4", 00:18:28.196 "traddr": "10.0.0.1", 00:18:28.196 "trsvcid": "39886" 00:18:28.196 }, 00:18:28.196 "auth": { 00:18:28.196 "state": "completed", 00:18:28.196 "digest": "sha384", 00:18:28.196 "dhgroup": "ffdhe3072" 00:18:28.196 } 00:18:28.196 } 00:18:28.196 ]' 00:18:28.456 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.456 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.456 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.456 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:28.456 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.456 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.456 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.456 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.715 11:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTFmOWEzNGYzNGExZDM3OWQ2NWZlNjg2YmM0MzdjMWUzYzZjZDU2MmI3ZWI4Y2JhZGRiYmQ1OTBlNThkM2RmNnccU+g=: 00:18:29.677 11:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.677 11:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:29.677 11:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.677 11:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.677 11:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.677 11:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:29.677 11:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.677 11:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:29.677 11:34:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:29.677 11:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:29.677 11:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.677 11:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:29.677 11:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:29.677 11:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:29.677 11:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.677 11:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.677 11:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.677 11:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.677 11:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.677 11:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.677 11:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.936 00:18:30.195 11:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.195 11:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.195 11:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.454 11:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.454 11:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.454 11:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.454 11:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.454 11:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.454 11:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.454 { 00:18:30.454 "cntlid": 73, 00:18:30.454 "qid": 0, 00:18:30.454 "state": "enabled", 00:18:30.454 "thread": "nvmf_tgt_poll_group_000", 00:18:30.454 "listen_address": { 00:18:30.454 "trtype": "TCP", 00:18:30.454 "adrfam": "IPv4", 00:18:30.454 "traddr": "10.0.0.2", 00:18:30.454 "trsvcid": "4420" 00:18:30.454 }, 00:18:30.454 "peer_address": { 00:18:30.454 "trtype": "TCP", 00:18:30.454 "adrfam": "IPv4", 00:18:30.454 "traddr": "10.0.0.1", 00:18:30.454 "trsvcid": "39920" 00:18:30.454 }, 00:18:30.454 "auth": { 00:18:30.454 
"state": "completed", 00:18:30.454 "digest": "sha384", 00:18:30.454 "dhgroup": "ffdhe4096" 00:18:30.454 } 00:18:30.454 } 00:18:30.454 ]' 00:18:30.454 11:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.454 11:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.454 11:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.454 11:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:30.454 11:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.454 11:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.454 11:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.454 11:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.713 11:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzY3YTcwZDhkM2ZhMzIyNTA0NWI3YjcwZWM3OTIyZTQ5N2FkNWMwNTY1ODk5YTlid4de6A==: --dhchap-ctrl-secret DHHC-1:03:NDQ1NTM5NjdiYjM2ZGZlZjgyMWU3ZjE1YTc3N2M2NGQ2MzhjMTBlZWM0NmM3YTUzOGI3OTkzMGJiMTE0YzVhY5XLBos=: 00:18:31.650 11:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.650 11:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:31.650 11:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.650 11:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.650 11:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.650 11:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.650 11:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:31.650 11:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:31.909 11:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:31.909 11:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.909 11:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:31.909 11:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:31.909 11:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:31.909 11:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.909 11:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.909 11:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.909 11:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.909 11:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.909 11:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.909 11:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.168 00:18:32.168 11:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.168 11:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.168 11:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.427 11:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.427 11:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.427 11:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.427 11:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.427 11:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.427 11:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.427 { 00:18:32.427 "cntlid": 75, 00:18:32.427 "qid": 0, 00:18:32.427 "state": "enabled", 00:18:32.427 "thread": "nvmf_tgt_poll_group_000", 00:18:32.427 "listen_address": { 00:18:32.427 "trtype": "TCP", 00:18:32.427 "adrfam": "IPv4", 00:18:32.427 "traddr": "10.0.0.2", 00:18:32.427 "trsvcid": "4420" 00:18:32.427 }, 00:18:32.427 "peer_address": { 00:18:32.427 "trtype": "TCP", 00:18:32.427 "adrfam": "IPv4", 00:18:32.427 "traddr": "10.0.0.1", 00:18:32.427 "trsvcid": "39952" 00:18:32.427 }, 00:18:32.427 "auth": { 00:18:32.427 "state": "completed", 00:18:32.427 "digest": "sha384", 00:18:32.427 "dhgroup": "ffdhe4096" 00:18:32.427 } 00:18:32.427 } 00:18:32.427 ]' 00:18:32.427 11:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.427 11:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:32.427 11:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.427 11:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:32.427 11:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.686 11:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.686 11:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.686 11:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.945 11:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NjQ5ZjlmNDVlOThmMTExNzVkNWU2YjA5NjEzZTNiN2NUVF1z: --dhchap-ctrl-secret DHHC-1:02:MjA5ODkzZjMzNDE2OWY3YWQ0MWU4MTk3M2NmN2UwNGUzN2UwMjdiZjY5YzIxN2NjjBu1gg==: 00:18:33.513 11:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.772 11:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:33.772 11:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.772 11:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.772 11:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.772 11:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.772 11:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:33.773 11:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:33.773 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:33.773 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.773 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:33.773 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:33.773 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:33.773 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.773 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.773 11:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.773 11:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.773 11:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.773 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.773 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:34.340 00:18:34.340 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.340 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.340 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.340 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.598 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.598 11:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.598 11:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.598 11:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.598 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.598 { 00:18:34.598 "cntlid": 77, 00:18:34.598 "qid": 0, 00:18:34.598 "state": "enabled", 00:18:34.598 "thread": "nvmf_tgt_poll_group_000", 00:18:34.598 "listen_address": { 00:18:34.598 "trtype": "TCP", 00:18:34.598 "adrfam": "IPv4", 00:18:34.598 "traddr": "10.0.0.2", 00:18:34.598 "trsvcid": "4420" 00:18:34.598 }, 00:18:34.598 "peer_address": { 00:18:34.598 "trtype": "TCP", 00:18:34.598 "adrfam": "IPv4", 00:18:34.598 "traddr": "10.0.0.1", 00:18:34.598 "trsvcid": "39986" 00:18:34.598 }, 00:18:34.598 "auth": { 00:18:34.598 "state": "completed", 00:18:34.598 "digest": "sha384", 00:18:34.598 "dhgroup": "ffdhe4096" 00:18:34.598 } 00:18:34.598 } 00:18:34.598 ]' 00:18:34.598 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.598 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.598 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.598 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:34.598 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.598 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.598 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.598 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.857 11:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTRmZWM5NDQ2MmMzZTBkNjI5Nzk5YTkxYThiNmM5OWM3MDQ0NzQ0Yjg5Yjc5NWE4IA+76Q==: --dhchap-ctrl-secret DHHC-1:01:ODhhZGI0NzA0MjEyMzM3MjdlNWVlYzhkNzI4MDU0MDKlMAdD: 00:18:35.793 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.793 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:35.793 11:34:10 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.793 11:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.793 11:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.793 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.793 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:35.793 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:36.051 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:36.051 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.051 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:36.051 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:36.051 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:36.052 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.052 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:36.052 11:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.052 11:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.052 11:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.052 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:36.052 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:36.619 00:18:36.619 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.619 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.619 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.877 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.877 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.877 11:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.877 11:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.877 11:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.877 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.877 { 00:18:36.877 "cntlid": 79, 00:18:36.877 "qid": 
0, 00:18:36.877 "state": "enabled", 00:18:36.877 "thread": "nvmf_tgt_poll_group_000", 00:18:36.877 "listen_address": { 00:18:36.877 "trtype": "TCP", 00:18:36.877 "adrfam": "IPv4", 00:18:36.877 "traddr": "10.0.0.2", 00:18:36.877 "trsvcid": "4420" 00:18:36.877 }, 00:18:36.877 "peer_address": { 00:18:36.877 "trtype": "TCP", 00:18:36.877 "adrfam": "IPv4", 00:18:36.877 "traddr": "10.0.0.1", 00:18:36.877 "trsvcid": "40014" 00:18:36.877 }, 00:18:36.877 "auth": { 00:18:36.877 "state": "completed", 00:18:36.877 "digest": "sha384", 00:18:36.877 "dhgroup": "ffdhe4096" 00:18:36.877 } 00:18:36.877 } 00:18:36.877 ]' 00:18:36.877 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.877 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.877 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.877 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:36.877 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.877 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.877 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.877 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.135 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTFmOWEzNGYzNGExZDM3OWQ2NWZlNjg2YmM0MzdjMWUzYzZjZDU2MmI3ZWI4Y2JhZGRiYmQ1OTBlNThkM2RmNnccU+g=: 00:18:38.071 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.071 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:38.071 11:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.071 11:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.071 11:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.071 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.071 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.071 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:38.071 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:38.330 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:38.330 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.330 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:38.330 11:34:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:38.330 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:38.330 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.330 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.330 11:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.330 11:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.330 11:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.330 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.330 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.902 00:18:38.902 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.902 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.902 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.902 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.902 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.903 11:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.903 11:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.903 11:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.903 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.903 { 00:18:38.903 "cntlid": 81, 00:18:38.903 "qid": 0, 00:18:38.903 "state": "enabled", 00:18:38.903 "thread": "nvmf_tgt_poll_group_000", 00:18:38.903 "listen_address": { 00:18:38.903 "trtype": "TCP", 00:18:38.903 "adrfam": "IPv4", 00:18:38.903 "traddr": "10.0.0.2", 00:18:38.903 "trsvcid": "4420" 00:18:38.903 }, 00:18:38.903 "peer_address": { 00:18:38.903 "trtype": "TCP", 00:18:38.903 "adrfam": "IPv4", 00:18:38.903 "traddr": "10.0.0.1", 00:18:38.903 "trsvcid": "45766" 00:18:38.903 }, 00:18:38.903 "auth": { 00:18:38.903 "state": "completed", 00:18:38.903 "digest": "sha384", 00:18:38.903 "dhgroup": "ffdhe6144" 00:18:38.903 } 00:18:38.903 } 00:18:38.903 ]' 00:18:38.903 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.903 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:38.903 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.162 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:39.162 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.162 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.162 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.162 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.422 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzY3YTcwZDhkM2ZhMzIyNTA0NWI3YjcwZWM3OTIyZTQ5N2FkNWMwNTY1ODk5YTlid4de6A==: --dhchap-ctrl-secret DHHC-1:03:NDQ1NTM5NjdiYjM2ZGZlZjgyMWU3ZjE1YTc3N2M2NGQ2MzhjMTBlZWM0NmM3YTUzOGI3OTkzMGJiMTE0YzVhY5XLBos=: 00:18:40.359 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.359 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:40.359 11:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.359 11:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.359 11:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.359 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.359 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:40.359 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:40.618 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:40.618 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.618 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:40.618 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:40.618 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:40.618 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.618 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.618 11:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.618 11:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.618 11:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.618 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.618 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.186 00:18:41.186 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.186 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.186 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.445 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.445 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.445 11:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.445 11:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.445 11:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.445 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.445 { 00:18:41.445 "cntlid": 83, 00:18:41.445 "qid": 0, 00:18:41.445 "state": "enabled", 00:18:41.445 "thread": "nvmf_tgt_poll_group_000", 00:18:41.445 "listen_address": { 00:18:41.445 "trtype": "TCP", 00:18:41.445 "adrfam": "IPv4", 00:18:41.445 "traddr": "10.0.0.2", 00:18:41.445 "trsvcid": "4420" 00:18:41.445 }, 00:18:41.445 "peer_address": { 00:18:41.445 "trtype": "TCP", 00:18:41.445 "adrfam": "IPv4", 00:18:41.445 "traddr": "10.0.0.1", 00:18:41.445 "trsvcid": "45796" 00:18:41.445 }, 00:18:41.445 "auth": { 00:18:41.445 "state": "completed", 00:18:41.445 "digest": "sha384", 00:18:41.445 "dhgroup": "ffdhe6144" 00:18:41.445 } 00:18:41.445 } 00:18:41.445 ]' 00:18:41.445 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.445 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:41.445 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.445 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:41.445 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.445 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.445 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.445 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.703 11:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NjQ5ZjlmNDVlOThmMTExNzVkNWU2YjA5NjEzZTNiN2NUVF1z: --dhchap-ctrl-secret 
DHHC-1:02:MjA5ODkzZjMzNDE2OWY3YWQ0MWU4MTk3M2NmN2UwNGUzN2UwMjdiZjY5YzIxN2NjjBu1gg==: 00:18:43.077 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.077 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:43.077 11:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.077 11:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.077 11:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.077 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.077 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:43.077 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:43.077 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:43.077 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.077 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:43.077 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:43.077 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:43.077 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.077 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.077 11:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.077 11:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.077 11:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.077 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.077 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.644 00:18:43.644 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.644 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.644 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.903 11:34:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.903 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.903 11:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.903 11:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.903 11:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.903 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.903 { 00:18:43.903 "cntlid": 85, 00:18:43.903 "qid": 0, 00:18:43.903 "state": "enabled", 00:18:43.903 "thread": "nvmf_tgt_poll_group_000", 00:18:43.903 "listen_address": { 00:18:43.903 "trtype": "TCP", 00:18:43.903 "adrfam": "IPv4", 00:18:43.903 "traddr": "10.0.0.2", 00:18:43.903 "trsvcid": "4420" 00:18:43.903 }, 00:18:43.903 "peer_address": { 00:18:43.903 "trtype": "TCP", 00:18:43.903 "adrfam": "IPv4", 00:18:43.903 "traddr": "10.0.0.1", 00:18:43.903 "trsvcid": "45808" 00:18:43.903 }, 00:18:43.903 "auth": { 00:18:43.903 "state": "completed", 00:18:43.903 "digest": "sha384", 00:18:43.903 "dhgroup": "ffdhe6144" 00:18:43.903 } 00:18:43.903 } 00:18:43.903 ]' 00:18:43.903 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.903 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:43.903 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.161 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:44.161 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.161 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.161 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.161 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.419 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTRmZWM5NDQ2MmMzZTBkNjI5Nzk5YTkxYThiNmM5OWM3MDQ0NzQ0Yjg5Yjc5NWE4IA+76Q==: --dhchap-ctrl-secret DHHC-1:01:ODhhZGI0NzA0MjEyMzM3MjdlNWVlYzhkNzI4MDU0MDKlMAdD: 00:18:44.986 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.986 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:44.986 11:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.986 11:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.986 11:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.986 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.986 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
00:18:44.986 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:45.245 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:45.245 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.245 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:45.245 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:45.245 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:45.245 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.245 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:45.245 11:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.245 11:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.245 11:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.245 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.245 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.820 00:18:45.820 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.820 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.820 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.078 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.078 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.079 11:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.079 11:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.079 11:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.079 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.079 { 00:18:46.079 "cntlid": 87, 00:18:46.079 "qid": 0, 00:18:46.079 "state": "enabled", 00:18:46.079 "thread": "nvmf_tgt_poll_group_000", 00:18:46.079 "listen_address": { 00:18:46.079 "trtype": "TCP", 00:18:46.079 "adrfam": "IPv4", 00:18:46.079 "traddr": "10.0.0.2", 00:18:46.079 "trsvcid": "4420" 00:18:46.079 }, 00:18:46.079 "peer_address": { 00:18:46.079 "trtype": "TCP", 00:18:46.079 "adrfam": "IPv4", 00:18:46.079 "traddr": "10.0.0.1", 00:18:46.079 "trsvcid": "45834" 00:18:46.079 }, 00:18:46.079 "auth": { 00:18:46.079 "state": "completed", 
00:18:46.079 "digest": "sha384", 00:18:46.079 "dhgroup": "ffdhe6144" 00:18:46.079 } 00:18:46.079 } 00:18:46.079 ]' 00:18:46.079 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.079 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.079 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.079 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:46.079 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.079 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.079 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.079 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.337 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTFmOWEzNGYzNGExZDM3OWQ2NWZlNjg2YmM0MzdjMWUzYzZjZDU2MmI3ZWI4Y2JhZGRiYmQ1OTBlNThkM2RmNnccU+g=: 00:18:47.272 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.272 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:47.272 11:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.272 11:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.272 11:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.272 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:47.272 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.272 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:47.272 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:47.531 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:47.531 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.531 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:47.531 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:47.531 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:47.531 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.531 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:47.531 11:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.531 11:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.531 11:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.531 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.531 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.098 00:18:48.098 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.098 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.098 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.357 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.357 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.357 11:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.357 11:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.357 11:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.357 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.357 { 00:18:48.357 "cntlid": 89, 00:18:48.357 "qid": 0, 00:18:48.357 "state": "enabled", 00:18:48.357 "thread": "nvmf_tgt_poll_group_000", 00:18:48.357 "listen_address": { 00:18:48.357 "trtype": "TCP", 00:18:48.357 "adrfam": "IPv4", 00:18:48.357 "traddr": "10.0.0.2", 00:18:48.357 "trsvcid": "4420" 00:18:48.357 }, 00:18:48.357 "peer_address": { 00:18:48.357 "trtype": "TCP", 00:18:48.357 "adrfam": "IPv4", 00:18:48.357 "traddr": "10.0.0.1", 00:18:48.357 "trsvcid": "43840" 00:18:48.357 }, 00:18:48.357 "auth": { 00:18:48.357 "state": "completed", 00:18:48.357 "digest": "sha384", 00:18:48.357 "dhgroup": "ffdhe8192" 00:18:48.357 } 00:18:48.357 } 00:18:48.357 ]' 00:18:48.357 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.357 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.357 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.357 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:48.357 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.357 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.357 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.357 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.964 11:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzY3YTcwZDhkM2ZhMzIyNTA0NWI3YjcwZWM3OTIyZTQ5N2FkNWMwNTY1ODk5YTlid4de6A==: --dhchap-ctrl-secret DHHC-1:03:NDQ1NTM5NjdiYjM2ZGZlZjgyMWU3ZjE1YTc3N2M2NGQ2MzhjMTBlZWM0NmM3YTUzOGI3OTkzMGJiMTE0YzVhY5XLBos=: 00:18:49.554 11:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.555 11:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:49.555 11:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.555 11:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.555 11:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.555 11:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.555 11:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:49.555 11:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:49.814 11:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:49.814 11:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.814 11:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:49.814 11:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:49.814 11:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:49.814 11:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.814 11:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.814 11:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.814 11:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.814 11:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.814 11:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.814 11:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:18:50.750 00:18:50.750 11:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.750 11:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.750 11:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.750 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.750 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.750 11:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.750 11:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.750 11:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.750 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.750 { 00:18:50.750 "cntlid": 91, 00:18:50.750 "qid": 0, 00:18:50.750 "state": "enabled", 00:18:50.750 "thread": "nvmf_tgt_poll_group_000", 00:18:50.750 "listen_address": { 00:18:50.750 "trtype": "TCP", 00:18:50.750 "adrfam": "IPv4", 00:18:50.750 "traddr": "10.0.0.2", 00:18:50.750 "trsvcid": "4420" 00:18:50.750 }, 00:18:50.750 "peer_address": { 00:18:50.750 "trtype": "TCP", 00:18:50.750 "adrfam": "IPv4", 00:18:50.750 "traddr": "10.0.0.1", 00:18:50.750 "trsvcid": "43854" 00:18:50.750 }, 00:18:50.750 "auth": { 00:18:50.750 "state": "completed", 00:18:50.750 "digest": "sha384", 00:18:50.750 "dhgroup": "ffdhe8192" 00:18:50.750 } 00:18:50.750 } 00:18:50.750 ]' 00:18:50.750 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.750 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.750 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.008 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:51.008 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.008 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.008 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.008 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.266 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NjQ5ZjlmNDVlOThmMTExNzVkNWU2YjA5NjEzZTNiN2NUVF1z: --dhchap-ctrl-secret DHHC-1:02:MjA5ODkzZjMzNDE2OWY3YWQ0MWU4MTk3M2NmN2UwNGUzN2UwMjdiZjY5YzIxN2NjjBu1gg==: 00:18:52.202 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.202 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:52.202 11:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:52.202 11:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.202 11:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.202 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.202 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:52.202 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:52.202 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:52.202 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.202 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:52.202 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:52.202 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:52.202 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.202 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.202 11:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.202 11:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.202 11:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.202 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.202 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.150 00:18:53.150 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.150 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.150 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.150 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.150 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.150 11:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.150 11:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.150 11:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.150 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.150 { 
00:18:53.150 "cntlid": 93, 00:18:53.150 "qid": 0, 00:18:53.150 "state": "enabled", 00:18:53.150 "thread": "nvmf_tgt_poll_group_000", 00:18:53.150 "listen_address": { 00:18:53.150 "trtype": "TCP", 00:18:53.150 "adrfam": "IPv4", 00:18:53.150 "traddr": "10.0.0.2", 00:18:53.150 "trsvcid": "4420" 00:18:53.150 }, 00:18:53.150 "peer_address": { 00:18:53.150 "trtype": "TCP", 00:18:53.150 "adrfam": "IPv4", 00:18:53.150 "traddr": "10.0.0.1", 00:18:53.150 "trsvcid": "43876" 00:18:53.150 }, 00:18:53.150 "auth": { 00:18:53.150 "state": "completed", 00:18:53.150 "digest": "sha384", 00:18:53.150 "dhgroup": "ffdhe8192" 00:18:53.150 } 00:18:53.150 } 00:18:53.150 ]' 00:18:53.150 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.410 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.410 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.410 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:53.410 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.410 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.411 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.411 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.669 11:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTRmZWM5NDQ2MmMzZTBkNjI5Nzk5YTkxYThiNmM5OWM3MDQ0NzQ0Yjg5Yjc5NWE4IA+76Q==: --dhchap-ctrl-secret DHHC-1:01:ODhhZGI0NzA0MjEyMzM3MjdlNWVlYzhkNzI4MDU0MDKlMAdD: 00:18:54.605 11:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.605 11:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:54.605 11:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.605 11:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.605 11:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.605 11:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.605 11:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:54.605 11:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:54.606 11:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:54.606 11:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.606 11:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:54.606 11:34:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:54.606 11:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:54.606 11:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.606 11:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:54.606 11:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.606 11:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.865 11:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.865 11:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.865 11:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.433 00:18:55.433 11:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.433 11:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.433 11:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.692 11:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.692 11:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.692 11:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.692 11:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.692 11:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.692 11:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.692 { 00:18:55.692 "cntlid": 95, 00:18:55.692 "qid": 0, 00:18:55.692 "state": "enabled", 00:18:55.692 "thread": "nvmf_tgt_poll_group_000", 00:18:55.692 "listen_address": { 00:18:55.692 "trtype": "TCP", 00:18:55.692 "adrfam": "IPv4", 00:18:55.692 "traddr": "10.0.0.2", 00:18:55.692 "trsvcid": "4420" 00:18:55.692 }, 00:18:55.692 "peer_address": { 00:18:55.692 "trtype": "TCP", 00:18:55.692 "adrfam": "IPv4", 00:18:55.692 "traddr": "10.0.0.1", 00:18:55.692 "trsvcid": "43896" 00:18:55.692 }, 00:18:55.692 "auth": { 00:18:55.692 "state": "completed", 00:18:55.692 "digest": "sha384", 00:18:55.692 "dhgroup": "ffdhe8192" 00:18:55.692 } 00:18:55.692 } 00:18:55.692 ]' 00:18:55.692 11:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.692 11:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.692 11:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.950 11:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:55.950 11:34:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.951 11:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.951 11:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.951 11:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.210 11:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTFmOWEzNGYzNGExZDM3OWQ2NWZlNjg2YmM0MzdjMWUzYzZjZDU2MmI3ZWI4Y2JhZGRiYmQ1OTBlNThkM2RmNnccU+g=: 00:18:57.146 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.146 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:57.146 11:34:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.146 11:34:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.146 11:34:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.146 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:57.146 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:57.146 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.146 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:57.146 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:57.146 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:57.146 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.146 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:57.146 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:57.146 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:57.146 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.146 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.146 11:34:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.146 11:34:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.146 11:34:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.146 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.147 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.405 00:18:57.665 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.665 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.665 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.924 11:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.924 11:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.924 11:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.924 11:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.924 11:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.924 11:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.924 { 00:18:57.924 "cntlid": 97, 00:18:57.924 "qid": 0, 00:18:57.924 "state": "enabled", 00:18:57.924 "thread": "nvmf_tgt_poll_group_000", 00:18:57.924 "listen_address": { 00:18:57.924 "trtype": "TCP", 00:18:57.924 "adrfam": "IPv4", 00:18:57.924 "traddr": "10.0.0.2", 00:18:57.924 "trsvcid": "4420" 00:18:57.924 }, 00:18:57.924 "peer_address": { 00:18:57.924 "trtype": "TCP", 00:18:57.924 "adrfam": "IPv4", 00:18:57.924 "traddr": "10.0.0.1", 00:18:57.924 "trsvcid": "54708" 00:18:57.924 }, 00:18:57.924 "auth": { 00:18:57.924 "state": "completed", 00:18:57.924 "digest": "sha512", 00:18:57.924 "dhgroup": "null" 00:18:57.924 } 00:18:57.924 } 00:18:57.924 ]' 00:18:57.924 11:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.924 11:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.924 11:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.924 11:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:57.924 11:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.924 11:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.924 11:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.924 11:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.183 11:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzY3YTcwZDhkM2ZhMzIyNTA0NWI3YjcwZWM3OTIyZTQ5N2FkNWMwNTY1ODk5YTlid4de6A==: --dhchap-ctrl-secret 
DHHC-1:03:NDQ1NTM5NjdiYjM2ZGZlZjgyMWU3ZjE1YTc3N2M2NGQ2MzhjMTBlZWM0NmM3YTUzOGI3OTkzMGJiMTE0YzVhY5XLBos=: 00:18:59.120 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.120 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:59.120 11:34:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.120 11:34:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.120 11:34:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.120 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.120 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:59.120 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:59.379 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:59.379 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.379 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:59.379 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:59.379 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:59.379 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.380 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.380 11:34:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.380 11:34:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.380 11:34:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.380 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.380 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.637 00:18:59.637 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.637 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.638 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.896 11:34:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.896 11:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.896 11:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.896 11:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.896 11:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.896 11:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.896 { 00:18:59.896 "cntlid": 99, 00:18:59.896 "qid": 0, 00:18:59.896 "state": "enabled", 00:18:59.896 "thread": "nvmf_tgt_poll_group_000", 00:18:59.896 "listen_address": { 00:18:59.896 "trtype": "TCP", 00:18:59.896 "adrfam": "IPv4", 00:18:59.896 "traddr": "10.0.0.2", 00:18:59.896 "trsvcid": "4420" 00:18:59.896 }, 00:18:59.896 "peer_address": { 00:18:59.896 "trtype": "TCP", 00:18:59.896 "adrfam": "IPv4", 00:18:59.896 "traddr": "10.0.0.1", 00:18:59.896 "trsvcid": "54740" 00:18:59.896 }, 00:18:59.896 "auth": { 00:18:59.896 "state": "completed", 00:18:59.896 "digest": "sha512", 00:18:59.896 "dhgroup": "null" 00:18:59.896 } 00:18:59.896 } 00:18:59.896 ]' 00:18:59.896 11:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.896 11:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.896 11:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.896 11:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:59.896 11:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.896 11:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.896 11:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.896 11:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.155 11:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NjQ5ZjlmNDVlOThmMTExNzVkNWU2YjA5NjEzZTNiN2NUVF1z: --dhchap-ctrl-secret DHHC-1:02:MjA5ODkzZjMzNDE2OWY3YWQ0MWU4MTk3M2NmN2UwNGUzN2UwMjdiZjY5YzIxN2NjjBu1gg==: 00:19:01.533 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.533 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:01.533 11:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.533 11:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.533 11:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.533 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.533 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:01.533 11:34:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:01.533 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:01.533 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.533 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:01.533 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:01.533 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:01.533 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.533 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.533 11:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.533 11:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.533 11:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.533 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.533 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.792 00:19:01.792 11:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.792 11:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.792 11:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.051 11:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.051 11:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.051 11:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.051 11:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.051 11:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.051 11:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.051 { 00:19:02.051 "cntlid": 101, 00:19:02.051 "qid": 0, 00:19:02.051 "state": "enabled", 00:19:02.051 "thread": "nvmf_tgt_poll_group_000", 00:19:02.051 "listen_address": { 00:19:02.051 "trtype": "TCP", 00:19:02.051 "adrfam": "IPv4", 00:19:02.051 "traddr": "10.0.0.2", 00:19:02.051 "trsvcid": "4420" 00:19:02.051 }, 00:19:02.051 "peer_address": { 00:19:02.051 "trtype": "TCP", 00:19:02.051 "adrfam": "IPv4", 00:19:02.051 "traddr": "10.0.0.1", 00:19:02.051 "trsvcid": "54772" 00:19:02.051 }, 00:19:02.051 "auth": 
{ 00:19:02.051 "state": "completed", 00:19:02.051 "digest": "sha512", 00:19:02.051 "dhgroup": "null" 00:19:02.051 } 00:19:02.051 } 00:19:02.051 ]' 00:19:02.051 11:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.051 11:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.051 11:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.051 11:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:02.051 11:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.310 11:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.310 11:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.310 11:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.569 11:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTRmZWM5NDQ2MmMzZTBkNjI5Nzk5YTkxYThiNmM5OWM3MDQ0NzQ0Yjg5Yjc5NWE4IA+76Q==: --dhchap-ctrl-secret DHHC-1:01:ODhhZGI0NzA0MjEyMzM3MjdlNWVlYzhkNzI4MDU0MDKlMAdD: 00:19:03.507 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.507 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:03.507 11:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.507 11:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.507 11:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.507 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.507 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:03.507 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:03.507 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:03.507 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.507 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:03.507 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:03.507 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:03.507 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.507 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:03.507 11:34:37 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.507 11:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.507 11:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.507 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:03.507 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:03.766 00:19:03.766 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.766 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.766 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.025 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.025 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.025 11:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.025 11:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.025 11:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.025 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.025 { 00:19:04.025 "cntlid": 103, 00:19:04.025 "qid": 0, 00:19:04.025 "state": "enabled", 00:19:04.025 "thread": "nvmf_tgt_poll_group_000", 00:19:04.025 "listen_address": { 00:19:04.025 "trtype": "TCP", 00:19:04.025 "adrfam": "IPv4", 00:19:04.025 "traddr": "10.0.0.2", 00:19:04.025 "trsvcid": "4420" 00:19:04.025 }, 00:19:04.025 "peer_address": { 00:19:04.025 "trtype": "TCP", 00:19:04.025 "adrfam": "IPv4", 00:19:04.025 "traddr": "10.0.0.1", 00:19:04.025 "trsvcid": "54800" 00:19:04.025 }, 00:19:04.025 "auth": { 00:19:04.025 "state": "completed", 00:19:04.025 "digest": "sha512", 00:19:04.025 "dhgroup": "null" 00:19:04.025 } 00:19:04.025 } 00:19:04.025 ]' 00:19:04.025 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.284 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.284 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.284 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:04.284 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.284 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.284 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.284 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.542 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTFmOWEzNGYzNGExZDM3OWQ2NWZlNjg2YmM0MzdjMWUzYzZjZDU2MmI3ZWI4Y2JhZGRiYmQ1OTBlNThkM2RmNnccU+g=: 00:19:05.478 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.478 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:05.478 11:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.478 11:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.478 11:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.478 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:05.478 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.479 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:05.479 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:05.479 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:05.479 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.479 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:05.479 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:05.479 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:05.479 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.479 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.479 11:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.479 11:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.479 11:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.479 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.479 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.737 00:19:05.996 11:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.996 11:34:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.996 11:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.255 11:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.255 11:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.255 11:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.255 11:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.255 11:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.255 11:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.255 { 00:19:06.255 "cntlid": 105, 00:19:06.255 "qid": 0, 00:19:06.255 "state": "enabled", 00:19:06.255 "thread": "nvmf_tgt_poll_group_000", 00:19:06.255 "listen_address": { 00:19:06.255 "trtype": "TCP", 00:19:06.255 "adrfam": "IPv4", 00:19:06.255 "traddr": "10.0.0.2", 00:19:06.255 "trsvcid": "4420" 00:19:06.255 }, 00:19:06.255 "peer_address": { 00:19:06.255 "trtype": "TCP", 00:19:06.255 "adrfam": "IPv4", 00:19:06.255 "traddr": "10.0.0.1", 00:19:06.255 "trsvcid": "54822" 00:19:06.255 }, 00:19:06.255 "auth": { 00:19:06.255 "state": "completed", 00:19:06.255 "digest": "sha512", 00:19:06.255 "dhgroup": "ffdhe2048" 00:19:06.255 } 00:19:06.255 } 00:19:06.255 ]' 00:19:06.255 11:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.255 11:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.255 11:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.255 11:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:06.255 11:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.255 11:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.255 11:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.255 11:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.514 11:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzY3YTcwZDhkM2ZhMzIyNTA0NWI3YjcwZWM3OTIyZTQ5N2FkNWMwNTY1ODk5YTlid4de6A==: --dhchap-ctrl-secret DHHC-1:03:NDQ1NTM5NjdiYjM2ZGZlZjgyMWU3ZjE1YTc3N2M2NGQ2MzhjMTBlZWM0NmM3YTUzOGI3OTkzMGJiMTE0YzVhY5XLBos=: 00:19:07.451 11:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.451 11:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:07.451 11:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.451 11:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:07.451 11:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.451 11:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.451 11:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:07.451 11:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:07.709 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:07.709 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.709 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:07.709 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:07.709 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:07.709 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.709 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.709 11:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.709 11:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.709 11:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.709 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.709 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.276 00:19:08.276 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.276 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.276 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.276 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.276 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.276 11:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.276 11:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.276 11:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.276 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.276 { 00:19:08.276 "cntlid": 107, 00:19:08.276 "qid": 0, 00:19:08.276 "state": "enabled", 00:19:08.276 "thread": 
"nvmf_tgt_poll_group_000", 00:19:08.276 "listen_address": { 00:19:08.276 "trtype": "TCP", 00:19:08.276 "adrfam": "IPv4", 00:19:08.276 "traddr": "10.0.0.2", 00:19:08.276 "trsvcid": "4420" 00:19:08.276 }, 00:19:08.276 "peer_address": { 00:19:08.276 "trtype": "TCP", 00:19:08.276 "adrfam": "IPv4", 00:19:08.276 "traddr": "10.0.0.1", 00:19:08.276 "trsvcid": "57768" 00:19:08.276 }, 00:19:08.276 "auth": { 00:19:08.276 "state": "completed", 00:19:08.276 "digest": "sha512", 00:19:08.276 "dhgroup": "ffdhe2048" 00:19:08.276 } 00:19:08.276 } 00:19:08.276 ]' 00:19:08.276 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.534 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.534 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.534 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:08.534 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.534 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.534 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.534 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.792 11:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NjQ5ZjlmNDVlOThmMTExNzVkNWU2YjA5NjEzZTNiN2NUVF1z: --dhchap-ctrl-secret DHHC-1:02:MjA5ODkzZjMzNDE2OWY3YWQ0MWU4MTk3M2NmN2UwNGUzN2UwMjdiZjY5YzIxN2NjjBu1gg==: 00:19:09.726 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.726 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:09.726 11:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.726 11:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.726 11:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.726 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.726 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:09.726 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:09.985 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:09.985 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.985 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:09.985 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:09.985 11:34:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:09.985 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.985 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.985 11:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.985 11:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.985 11:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.985 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.985 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.243 00:19:10.243 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.243 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.243 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.502 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.502 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.502 11:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.502 11:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.502 11:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.502 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.502 { 00:19:10.502 "cntlid": 109, 00:19:10.502 "qid": 0, 00:19:10.502 "state": "enabled", 00:19:10.502 "thread": "nvmf_tgt_poll_group_000", 00:19:10.502 "listen_address": { 00:19:10.502 "trtype": "TCP", 00:19:10.502 "adrfam": "IPv4", 00:19:10.502 "traddr": "10.0.0.2", 00:19:10.502 "trsvcid": "4420" 00:19:10.502 }, 00:19:10.502 "peer_address": { 00:19:10.502 "trtype": "TCP", 00:19:10.502 "adrfam": "IPv4", 00:19:10.502 "traddr": "10.0.0.1", 00:19:10.502 "trsvcid": "57804" 00:19:10.502 }, 00:19:10.502 "auth": { 00:19:10.502 "state": "completed", 00:19:10.502 "digest": "sha512", 00:19:10.502 "dhgroup": "ffdhe2048" 00:19:10.502 } 00:19:10.502 } 00:19:10.502 ]' 00:19:10.502 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.502 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.502 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.760 11:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:10.760 11:34:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.760 11:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.760 11:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.760 11:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.017 11:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTRmZWM5NDQ2MmMzZTBkNjI5Nzk5YTkxYThiNmM5OWM3MDQ0NzQ0Yjg5Yjc5NWE4IA+76Q==: --dhchap-ctrl-secret DHHC-1:01:ODhhZGI0NzA0MjEyMzM3MjdlNWVlYzhkNzI4MDU0MDKlMAdD: 00:19:12.393 11:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.393 11:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:12.393 11:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.393 11:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.393 11:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.393 11:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.393 11:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:12.393 11:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:12.393 11:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:12.393 11:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.393 11:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:12.393 11:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:12.393 11:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:12.393 11:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.393 11:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:12.393 11:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.393 11:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.393 11:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.393 11:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.393 11:34:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.652 00:19:12.652 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.652 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.652 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.910 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.910 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.910 11:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.910 11:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.910 11:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.910 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.910 { 00:19:12.910 "cntlid": 111, 00:19:12.910 "qid": 0, 00:19:12.910 "state": "enabled", 00:19:12.910 "thread": "nvmf_tgt_poll_group_000", 00:19:12.910 "listen_address": { 00:19:12.910 "trtype": "TCP", 00:19:12.910 "adrfam": "IPv4", 00:19:12.910 "traddr": "10.0.0.2", 00:19:12.910 "trsvcid": "4420" 00:19:12.910 }, 00:19:12.910 "peer_address": { 00:19:12.910 "trtype": "TCP", 00:19:12.910 "adrfam": "IPv4", 00:19:12.910 "traddr": "10.0.0.1", 00:19:12.910 "trsvcid": "57832" 00:19:12.910 }, 00:19:12.910 "auth": { 00:19:12.910 "state": "completed", 00:19:12.910 "digest": "sha512", 00:19:12.910 "dhgroup": "ffdhe2048" 00:19:12.910 } 00:19:12.910 } 00:19:12.910 ]' 00:19:12.910 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.910 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.910 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.910 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:12.910 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.168 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.168 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.168 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.426 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTFmOWEzNGYzNGExZDM3OWQ2NWZlNjg2YmM0MzdjMWUzYzZjZDU2MmI3ZWI4Y2JhZGRiYmQ1OTBlNThkM2RmNnccU+g=: 00:19:13.994 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.994 11:34:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:13.994 11:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.994 11:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.253 11:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.253 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.253 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.253 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:14.253 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:14.253 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:14.253 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.513 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:14.513 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:14.513 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:14.513 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.513 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.513 11:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.513 11:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.513 11:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.513 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.513 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.513 00:19:14.772 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.772 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.772 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.034 11:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.034 11:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.034 11:34:49 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.034 11:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.034 11:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.034 11:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.034 { 00:19:15.034 "cntlid": 113, 00:19:15.034 "qid": 0, 00:19:15.034 "state": "enabled", 00:19:15.034 "thread": "nvmf_tgt_poll_group_000", 00:19:15.034 "listen_address": { 00:19:15.034 "trtype": "TCP", 00:19:15.034 "adrfam": "IPv4", 00:19:15.034 "traddr": "10.0.0.2", 00:19:15.034 "trsvcid": "4420" 00:19:15.034 }, 00:19:15.034 "peer_address": { 00:19:15.034 "trtype": "TCP", 00:19:15.035 "adrfam": "IPv4", 00:19:15.035 "traddr": "10.0.0.1", 00:19:15.035 "trsvcid": "57870" 00:19:15.035 }, 00:19:15.035 "auth": { 00:19:15.035 "state": "completed", 00:19:15.035 "digest": "sha512", 00:19:15.035 "dhgroup": "ffdhe3072" 00:19:15.035 } 00:19:15.035 } 00:19:15.035 ]' 00:19:15.035 11:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.035 11:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.035 11:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.035 11:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:15.035 11:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.035 11:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.035 11:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.035 11:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.294 11:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzY3YTcwZDhkM2ZhMzIyNTA0NWI3YjcwZWM3OTIyZTQ5N2FkNWMwNTY1ODk5YTlid4de6A==: --dhchap-ctrl-secret DHHC-1:03:NDQ1NTM5NjdiYjM2ZGZlZjgyMWU3ZjE1YTc3N2M2NGQ2MzhjMTBlZWM0NmM3YTUzOGI3OTkzMGJiMTE0YzVhY5XLBos=: 00:19:16.234 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.234 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:16.234 11:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.234 11:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.234 11:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.234 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.234 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:16.234 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:16.515 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:16.515 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.516 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:16.516 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:16.516 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:16.516 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.516 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.516 11:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.516 11:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.516 11:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.516 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.516 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.811 00:19:16.811 11:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.811 11:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.811 11:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.076 11:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.076 11:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.076 11:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.076 11:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.076 11:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.076 11:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.076 { 00:19:17.076 "cntlid": 115, 00:19:17.076 "qid": 0, 00:19:17.076 "state": "enabled", 00:19:17.076 "thread": "nvmf_tgt_poll_group_000", 00:19:17.076 "listen_address": { 00:19:17.076 "trtype": "TCP", 00:19:17.076 "adrfam": "IPv4", 00:19:17.076 "traddr": "10.0.0.2", 00:19:17.076 "trsvcid": "4420" 00:19:17.076 }, 00:19:17.076 "peer_address": { 00:19:17.076 "trtype": "TCP", 00:19:17.076 "adrfam": "IPv4", 00:19:17.076 "traddr": "10.0.0.1", 00:19:17.076 "trsvcid": "57886" 00:19:17.076 }, 00:19:17.076 "auth": { 00:19:17.076 "state": "completed", 00:19:17.076 "digest": "sha512", 00:19:17.076 "dhgroup": "ffdhe3072" 00:19:17.076 } 00:19:17.076 } 
00:19:17.076 ]' 00:19:17.076 11:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.076 11:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.076 11:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.076 11:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:17.076 11:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.076 11:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.076 11:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.076 11:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.336 11:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NjQ5ZjlmNDVlOThmMTExNzVkNWU2YjA5NjEzZTNiN2NUVF1z: --dhchap-ctrl-secret DHHC-1:02:MjA5ODkzZjMzNDE2OWY3YWQ0MWU4MTk3M2NmN2UwNGUzN2UwMjdiZjY5YzIxN2NjjBu1gg==: 00:19:18.275 11:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.275 11:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:18.275 11:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.275 11:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.275 11:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.275 11:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.275 11:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:18.275 11:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:18.275 11:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:18.275 11:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.275 11:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:18.275 11:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:18.275 11:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:18.275 11:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.275 11:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.275 11:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.275 11:34:52 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.534 11:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.534 11:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.534 11:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.102 00:19:19.102 11:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.102 11:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.102 11:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.361 11:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.361 11:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.361 11:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.361 11:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.361 11:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.361 11:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.361 { 00:19:19.361 "cntlid": 117, 00:19:19.361 "qid": 0, 00:19:19.361 "state": "enabled", 00:19:19.361 "thread": "nvmf_tgt_poll_group_000", 00:19:19.361 "listen_address": { 00:19:19.361 "trtype": "TCP", 00:19:19.361 "adrfam": "IPv4", 00:19:19.361 "traddr": "10.0.0.2", 00:19:19.361 "trsvcid": "4420" 00:19:19.361 }, 00:19:19.361 "peer_address": { 00:19:19.361 "trtype": "TCP", 00:19:19.361 "adrfam": "IPv4", 00:19:19.361 "traddr": "10.0.0.1", 00:19:19.361 "trsvcid": "56098" 00:19:19.361 }, 00:19:19.361 "auth": { 00:19:19.361 "state": "completed", 00:19:19.361 "digest": "sha512", 00:19:19.361 "dhgroup": "ffdhe3072" 00:19:19.361 } 00:19:19.361 } 00:19:19.361 ]' 00:19:19.361 11:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.361 11:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.361 11:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.361 11:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:19.361 11:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.361 11:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.361 11:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.361 11:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.620 11:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTRmZWM5NDQ2MmMzZTBkNjI5Nzk5YTkxYThiNmM5OWM3MDQ0NzQ0Yjg5Yjc5NWE4IA+76Q==: --dhchap-ctrl-secret DHHC-1:01:ODhhZGI0NzA0MjEyMzM3MjdlNWVlYzhkNzI4MDU0MDKlMAdD: 00:19:20.555 11:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.555 11:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:20.555 11:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.555 11:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.555 11:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.555 11:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.555 11:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:20.555 11:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:21.122 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:21.122 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.122 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:21.122 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:21.122 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:21.122 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.122 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:21.122 11:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.122 11:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.122 11:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.123 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.123 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.382 00:19:21.382 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.382 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.382 11:34:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.642 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.642 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.642 11:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.642 11:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.642 11:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.642 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.642 { 00:19:21.642 "cntlid": 119, 00:19:21.642 "qid": 0, 00:19:21.642 "state": "enabled", 00:19:21.642 "thread": "nvmf_tgt_poll_group_000", 00:19:21.642 "listen_address": { 00:19:21.642 "trtype": "TCP", 00:19:21.642 "adrfam": "IPv4", 00:19:21.642 "traddr": "10.0.0.2", 00:19:21.642 "trsvcid": "4420" 00:19:21.642 }, 00:19:21.642 "peer_address": { 00:19:21.642 "trtype": "TCP", 00:19:21.642 "adrfam": "IPv4", 00:19:21.642 "traddr": "10.0.0.1", 00:19:21.642 "trsvcid": "56120" 00:19:21.642 }, 00:19:21.642 "auth": { 00:19:21.642 "state": "completed", 00:19:21.642 "digest": "sha512", 00:19:21.642 "dhgroup": "ffdhe3072" 00:19:21.642 } 00:19:21.642 } 00:19:21.642 ]' 00:19:21.642 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.642 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.642 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.642 11:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:21.642 11:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.642 11:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.642 11:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.642 11:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.901 11:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTFmOWEzNGYzNGExZDM3OWQ2NWZlNjg2YmM0MzdjMWUzYzZjZDU2MmI3ZWI4Y2JhZGRiYmQ1OTBlNThkM2RmNnccU+g=: 00:19:22.837 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.837 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:22.837 11:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.837 11:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.837 11:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.837 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:22.837 11:34:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.837 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:22.837 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:23.095 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:23.095 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.095 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:23.095 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:23.095 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:23.095 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.095 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.095 11:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.095 11:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.096 11:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.096 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.096 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.661 00:19:23.662 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.662 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.662 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.920 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.920 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.920 11:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.920 11:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.920 11:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.920 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.920 { 00:19:23.920 "cntlid": 121, 00:19:23.920 "qid": 0, 00:19:23.920 "state": "enabled", 00:19:23.920 "thread": "nvmf_tgt_poll_group_000", 00:19:23.920 "listen_address": { 00:19:23.920 "trtype": "TCP", 00:19:23.920 "adrfam": "IPv4", 
00:19:23.920 "traddr": "10.0.0.2", 00:19:23.920 "trsvcid": "4420" 00:19:23.920 }, 00:19:23.920 "peer_address": { 00:19:23.920 "trtype": "TCP", 00:19:23.920 "adrfam": "IPv4", 00:19:23.920 "traddr": "10.0.0.1", 00:19:23.920 "trsvcid": "56146" 00:19:23.920 }, 00:19:23.920 "auth": { 00:19:23.920 "state": "completed", 00:19:23.920 "digest": "sha512", 00:19:23.920 "dhgroup": "ffdhe4096" 00:19:23.920 } 00:19:23.920 } 00:19:23.920 ]' 00:19:23.920 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.920 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.920 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.920 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:23.920 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.178 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.178 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.178 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.436 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzY3YTcwZDhkM2ZhMzIyNTA0NWI3YjcwZWM3OTIyZTQ5N2FkNWMwNTY1ODk5YTlid4de6A==: --dhchap-ctrl-secret DHHC-1:03:NDQ1NTM5NjdiYjM2ZGZlZjgyMWU3ZjE1YTc3N2M2NGQ2MzhjMTBlZWM0NmM3YTUzOGI3OTkzMGJiMTE0YzVhY5XLBos=: 00:19:25.372 11:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.630 11:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:25.630 11:34:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.630 11:34:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.630 11:34:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.630 11:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.631 11:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:25.631 11:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:25.889 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:25.889 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.889 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:25.889 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:25.889 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:25.889 11:35:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.889 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.889 11:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.889 11:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.889 11:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.889 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.889 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.147 00:19:26.147 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.147 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.147 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.405 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.405 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.405 11:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.405 11:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.405 11:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.405 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.405 { 00:19:26.405 "cntlid": 123, 00:19:26.405 "qid": 0, 00:19:26.405 "state": "enabled", 00:19:26.405 "thread": "nvmf_tgt_poll_group_000", 00:19:26.405 "listen_address": { 00:19:26.405 "trtype": "TCP", 00:19:26.405 "adrfam": "IPv4", 00:19:26.405 "traddr": "10.0.0.2", 00:19:26.405 "trsvcid": "4420" 00:19:26.405 }, 00:19:26.405 "peer_address": { 00:19:26.405 "trtype": "TCP", 00:19:26.405 "adrfam": "IPv4", 00:19:26.405 "traddr": "10.0.0.1", 00:19:26.405 "trsvcid": "56182" 00:19:26.405 }, 00:19:26.405 "auth": { 00:19:26.405 "state": "completed", 00:19:26.405 "digest": "sha512", 00:19:26.405 "dhgroup": "ffdhe4096" 00:19:26.405 } 00:19:26.405 } 00:19:26.405 ]' 00:19:26.405 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.405 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.405 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.405 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:26.405 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.664 11:35:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.664 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.664 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.921 11:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NjQ5ZjlmNDVlOThmMTExNzVkNWU2YjA5NjEzZTNiN2NUVF1z: --dhchap-ctrl-secret DHHC-1:02:MjA5ODkzZjMzNDE2OWY3YWQ0MWU4MTk3M2NmN2UwNGUzN2UwMjdiZjY5YzIxN2NjjBu1gg==: 00:19:27.860 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.860 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:27.860 11:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.860 11:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.860 11:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.860 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.860 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:27.860 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:27.860 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:27.860 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.860 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:27.860 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:27.860 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:27.860 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.860 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.860 11:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.860 11:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.118 11:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.118 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.118 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.685 00:19:28.686 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.686 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.686 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.960 11:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.960 11:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.961 11:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.961 11:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.961 11:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.961 11:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.961 { 00:19:28.961 "cntlid": 125, 00:19:28.961 "qid": 0, 00:19:28.961 "state": "enabled", 00:19:28.961 "thread": "nvmf_tgt_poll_group_000", 00:19:28.961 "listen_address": { 00:19:28.961 "trtype": "TCP", 00:19:28.961 "adrfam": "IPv4", 00:19:28.961 "traddr": "10.0.0.2", 00:19:28.961 "trsvcid": "4420" 00:19:28.961 }, 00:19:28.961 "peer_address": { 00:19:28.961 "trtype": "TCP", 00:19:28.961 "adrfam": "IPv4", 00:19:28.961 "traddr": "10.0.0.1", 00:19:28.961 "trsvcid": "58872" 00:19:28.961 }, 00:19:28.961 "auth": { 00:19:28.961 "state": "completed", 00:19:28.961 "digest": "sha512", 00:19:28.961 "dhgroup": "ffdhe4096" 00:19:28.961 } 00:19:28.961 } 00:19:28.961 ]' 00:19:28.961 11:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.961 11:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.961 11:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.961 11:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:28.961 11:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.962 11:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.962 11:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.962 11:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.224 11:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTRmZWM5NDQ2MmMzZTBkNjI5Nzk5YTkxYThiNmM5OWM3MDQ0NzQ0Yjg5Yjc5NWE4IA+76Q==: --dhchap-ctrl-secret DHHC-1:01:ODhhZGI0NzA0MjEyMzM3MjdlNWVlYzhkNzI4MDU0MDKlMAdD: 00:19:30.160 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
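[editor's note] The passes above and below all repeat one verification sequence per digest/dhgroup/key-index combination. A minimal shell sketch of that per-iteration flow, condensed from the commands visible in this trace, is shown here for readability: the rpc_cmd/hostrpc helpers, NQNs, address and port are the ones the trace itself uses, but the wrapper function name is illustrative, the DHHC-1 secret strings are abbreviated, and the controller-key arguments are assumed present (in this run, key index 3 has no controller key, so --dhchap-ctrlr-key/--dhchap-ctrl-secret are omitted for that pass).

# Sketch of one connect_authenticate-style iteration, assuming the helpers above are in scope.
verify_dhchap() {
    local digest=$1 dhgroup=$2 keyid=$3
    local subnqn=nqn.2024-03.io.spdk:cnode0
    local hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562

    # Restrict the host-side bdev_nvme layer to the digest/dhgroup under test.
    hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Allow the host on the target subsystem, bound to the DH-CHAP key pair for this index.
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Attach from the SPDK host stack, then confirm the queue pair authenticated.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -e \
        --arg d "$digest" --arg g "$dhgroup" \
        '.[0].auth | .digest == $d and .dhgroup == $g and .state == "completed"'
    hostrpc bdev_nvme_detach_controller nvme0

    # Repeat the handshake through the kernel initiator with the matching DHHC-1 secrets
    # (the full secret strings appear in the trace; abbreviated here).
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid "${hostnqn#*uuid:}" \
        --dhchap-secret "DHHC-1:01:..." --dhchap-ctrl-secret "DHHC-1:02:..."
    nvme disconnect -n "$subnqn"

    # Tear down so the next digest/dhgroup/key combination starts from a clean state.
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
}

In the remainder of this section the same loop simply advances the dhgroup (ffdhe4096, ffdhe6144, ffdhe8192) and cycles key indexes 0-3, which is why the surrounding log entries differ only in those arguments and in the ephemeral peer port numbers. [end editor's note]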
00:19:30.161 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:30.161 11:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.161 11:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.161 11:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.161 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.161 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:30.161 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:30.161 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:30.161 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.161 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:30.161 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:30.161 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:30.161 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.161 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:30.161 11:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.161 11:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.161 11:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.161 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.161 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.726 00:19:30.726 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.726 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.726 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.726 11:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.726 11:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.726 11:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.726 11:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:30.984 11:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.984 11:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.984 { 00:19:30.984 "cntlid": 127, 00:19:30.984 "qid": 0, 00:19:30.984 "state": "enabled", 00:19:30.984 "thread": "nvmf_tgt_poll_group_000", 00:19:30.984 "listen_address": { 00:19:30.984 "trtype": "TCP", 00:19:30.984 "adrfam": "IPv4", 00:19:30.984 "traddr": "10.0.0.2", 00:19:30.984 "trsvcid": "4420" 00:19:30.984 }, 00:19:30.984 "peer_address": { 00:19:30.984 "trtype": "TCP", 00:19:30.984 "adrfam": "IPv4", 00:19:30.984 "traddr": "10.0.0.1", 00:19:30.984 "trsvcid": "58896" 00:19:30.984 }, 00:19:30.984 "auth": { 00:19:30.984 "state": "completed", 00:19:30.984 "digest": "sha512", 00:19:30.984 "dhgroup": "ffdhe4096" 00:19:30.984 } 00:19:30.984 } 00:19:30.984 ]' 00:19:30.984 11:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.984 11:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:30.984 11:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.984 11:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:30.984 11:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.984 11:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.984 11:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.984 11:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.242 11:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTFmOWEzNGYzNGExZDM3OWQ2NWZlNjg2YmM0MzdjMWUzYzZjZDU2MmI3ZWI4Y2JhZGRiYmQ1OTBlNThkM2RmNnccU+g=: 00:19:32.185 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.185 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:32.185 11:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.185 11:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.185 11:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.185 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:32.185 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.185 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:32.185 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:32.185 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:19:32.185 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.185 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:32.185 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:32.185 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:32.185 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.185 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.185 11:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.185 11:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.443 11:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.443 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.443 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.029 00:19:33.029 11:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.029 11:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.029 11:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.287 11:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.287 11:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.287 11:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.287 11:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.287 11:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.287 11:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.287 { 00:19:33.287 "cntlid": 129, 00:19:33.287 "qid": 0, 00:19:33.287 "state": "enabled", 00:19:33.287 "thread": "nvmf_tgt_poll_group_000", 00:19:33.287 "listen_address": { 00:19:33.287 "trtype": "TCP", 00:19:33.287 "adrfam": "IPv4", 00:19:33.287 "traddr": "10.0.0.2", 00:19:33.287 "trsvcid": "4420" 00:19:33.287 }, 00:19:33.287 "peer_address": { 00:19:33.287 "trtype": "TCP", 00:19:33.287 "adrfam": "IPv4", 00:19:33.287 "traddr": "10.0.0.1", 00:19:33.287 "trsvcid": "58934" 00:19:33.287 }, 00:19:33.287 "auth": { 00:19:33.287 "state": "completed", 00:19:33.287 "digest": "sha512", 00:19:33.287 "dhgroup": "ffdhe6144" 00:19:33.287 } 00:19:33.287 } 00:19:33.287 ]' 00:19:33.287 11:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.287 11:35:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.287 11:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.544 11:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:33.544 11:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.544 11:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.544 11:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.544 11:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.544 11:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzY3YTcwZDhkM2ZhMzIyNTA0NWI3YjcwZWM3OTIyZTQ5N2FkNWMwNTY1ODk5YTlid4de6A==: --dhchap-ctrl-secret DHHC-1:03:NDQ1NTM5NjdiYjM2ZGZlZjgyMWU3ZjE1YTc3N2M2NGQ2MzhjMTBlZWM0NmM3YTUzOGI3OTkzMGJiMTE0YzVhY5XLBos=: 00:19:34.478 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.478 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:34.478 11:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.478 11:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.478 11:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.478 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.478 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:34.478 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:35.045 11:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:35.045 11:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.045 11:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:35.045 11:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:35.045 11:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:35.045 11:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.045 11:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.045 11:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.045 11:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.045 11:35:09 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.045 11:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.045 11:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.611 00:19:35.611 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.611 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.611 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.870 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.870 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.870 11:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.870 11:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.129 11:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.129 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.129 { 00:19:36.129 "cntlid": 131, 00:19:36.129 "qid": 0, 00:19:36.129 "state": "enabled", 00:19:36.129 "thread": "nvmf_tgt_poll_group_000", 00:19:36.129 "listen_address": { 00:19:36.129 "trtype": "TCP", 00:19:36.129 "adrfam": "IPv4", 00:19:36.129 "traddr": "10.0.0.2", 00:19:36.129 "trsvcid": "4420" 00:19:36.129 }, 00:19:36.129 "peer_address": { 00:19:36.129 "trtype": "TCP", 00:19:36.129 "adrfam": "IPv4", 00:19:36.129 "traddr": "10.0.0.1", 00:19:36.129 "trsvcid": "58948" 00:19:36.129 }, 00:19:36.129 "auth": { 00:19:36.129 "state": "completed", 00:19:36.129 "digest": "sha512", 00:19:36.129 "dhgroup": "ffdhe6144" 00:19:36.129 } 00:19:36.129 } 00:19:36.129 ]' 00:19:36.129 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.129 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.129 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.129 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:36.129 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.129 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.129 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.129 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.388 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NjQ5ZjlmNDVlOThmMTExNzVkNWU2YjA5NjEzZTNiN2NUVF1z: --dhchap-ctrl-secret DHHC-1:02:MjA5ODkzZjMzNDE2OWY3YWQ0MWU4MTk3M2NmN2UwNGUzN2UwMjdiZjY5YzIxN2NjjBu1gg==: 00:19:37.323 11:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.323 11:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:37.323 11:35:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.323 11:35:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.323 11:35:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.323 11:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.323 11:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:37.323 11:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:37.323 11:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:37.323 11:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.581 11:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:37.581 11:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:37.581 11:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:37.581 11:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.581 11:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.581 11:35:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.581 11:35:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.581 11:35:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.581 11:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.581 11:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.147 00:19:38.147 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.147 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.147 11:35:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.405 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.405 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.405 11:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.405 11:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.405 11:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.405 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.405 { 00:19:38.405 "cntlid": 133, 00:19:38.405 "qid": 0, 00:19:38.405 "state": "enabled", 00:19:38.405 "thread": "nvmf_tgt_poll_group_000", 00:19:38.405 "listen_address": { 00:19:38.405 "trtype": "TCP", 00:19:38.405 "adrfam": "IPv4", 00:19:38.405 "traddr": "10.0.0.2", 00:19:38.405 "trsvcid": "4420" 00:19:38.405 }, 00:19:38.405 "peer_address": { 00:19:38.405 "trtype": "TCP", 00:19:38.405 "adrfam": "IPv4", 00:19:38.405 "traddr": "10.0.0.1", 00:19:38.405 "trsvcid": "49250" 00:19:38.405 }, 00:19:38.405 "auth": { 00:19:38.405 "state": "completed", 00:19:38.405 "digest": "sha512", 00:19:38.405 "dhgroup": "ffdhe6144" 00:19:38.405 } 00:19:38.405 } 00:19:38.406 ]' 00:19:38.406 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.406 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.406 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.664 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:38.664 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.664 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.664 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.664 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.934 11:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTRmZWM5NDQ2MmMzZTBkNjI5Nzk5YTkxYThiNmM5OWM3MDQ0NzQ0Yjg5Yjc5NWE4IA+76Q==: --dhchap-ctrl-secret DHHC-1:01:ODhhZGI0NzA0MjEyMzM3MjdlNWVlYzhkNzI4MDU0MDKlMAdD: 00:19:39.869 11:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.870 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:39.870 11:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.870 11:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.870 11:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.870 11:35:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.870 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:39.870 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:40.128 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:40.128 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.128 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:40.128 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:40.128 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:40.128 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.128 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:40.128 11:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.128 11:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.128 11:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.128 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:40.128 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:41.064 00:19:41.064 11:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.064 11:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.064 11:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.064 11:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.064 11:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.064 11:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.064 11:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.323 11:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.323 11:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.323 { 00:19:41.323 "cntlid": 135, 00:19:41.323 "qid": 0, 00:19:41.323 "state": "enabled", 00:19:41.323 "thread": "nvmf_tgt_poll_group_000", 00:19:41.323 "listen_address": { 00:19:41.323 "trtype": "TCP", 00:19:41.323 "adrfam": "IPv4", 00:19:41.323 "traddr": "10.0.0.2", 00:19:41.323 "trsvcid": "4420" 00:19:41.323 }, 
00:19:41.323 "peer_address": { 00:19:41.323 "trtype": "TCP", 00:19:41.323 "adrfam": "IPv4", 00:19:41.323 "traddr": "10.0.0.1", 00:19:41.323 "trsvcid": "49278" 00:19:41.323 }, 00:19:41.323 "auth": { 00:19:41.323 "state": "completed", 00:19:41.323 "digest": "sha512", 00:19:41.323 "dhgroup": "ffdhe6144" 00:19:41.323 } 00:19:41.323 } 00:19:41.323 ]' 00:19:41.323 11:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.323 11:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:41.323 11:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.323 11:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:41.323 11:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.323 11:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.323 11:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.323 11:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.582 11:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTFmOWEzNGYzNGExZDM3OWQ2NWZlNjg2YmM0MzdjMWUzYzZjZDU2MmI3ZWI4Y2JhZGRiYmQ1OTBlNThkM2RmNnccU+g=: 00:19:42.516 11:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.516 11:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:42.516 11:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.516 11:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.516 11:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.516 11:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.516 11:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.516 11:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:42.516 11:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:42.774 11:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:42.774 11:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.774 11:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:42.774 11:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:42.774 11:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:42.774 11:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:42.774 11:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.774 11:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.774 11:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.774 11:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.774 11:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.774 11:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.710 00:19:43.710 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.710 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.710 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.989 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.989 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.989 11:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.989 11:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.989 11:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.989 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.989 { 00:19:43.989 "cntlid": 137, 00:19:43.989 "qid": 0, 00:19:43.989 "state": "enabled", 00:19:43.989 "thread": "nvmf_tgt_poll_group_000", 00:19:43.989 "listen_address": { 00:19:43.989 "trtype": "TCP", 00:19:43.989 "adrfam": "IPv4", 00:19:43.989 "traddr": "10.0.0.2", 00:19:43.989 "trsvcid": "4420" 00:19:43.989 }, 00:19:43.989 "peer_address": { 00:19:43.989 "trtype": "TCP", 00:19:43.989 "adrfam": "IPv4", 00:19:43.989 "traddr": "10.0.0.1", 00:19:43.989 "trsvcid": "49310" 00:19:43.989 }, 00:19:43.989 "auth": { 00:19:43.989 "state": "completed", 00:19:43.989 "digest": "sha512", 00:19:43.989 "dhgroup": "ffdhe8192" 00:19:43.989 } 00:19:43.989 } 00:19:43.989 ]' 00:19:43.989 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.989 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.989 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.989 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:43.989 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.989 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.989 11:35:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.989 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.247 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzY3YTcwZDhkM2ZhMzIyNTA0NWI3YjcwZWM3OTIyZTQ5N2FkNWMwNTY1ODk5YTlid4de6A==: --dhchap-ctrl-secret DHHC-1:03:NDQ1NTM5NjdiYjM2ZGZlZjgyMWU3ZjE1YTc3N2M2NGQ2MzhjMTBlZWM0NmM3YTUzOGI3OTkzMGJiMTE0YzVhY5XLBos=: 00:19:45.181 11:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.181 11:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:45.181 11:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.181 11:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.181 11:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.181 11:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.181 11:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:45.181 11:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:45.439 11:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:45.439 11:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.439 11:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:45.439 11:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:45.439 11:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:45.439 11:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.439 11:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.439 11:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.439 11:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.439 11:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.439 11:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.439 11:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.375 00:19:46.375 11:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.375 11:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.375 11:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.633 11:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.633 11:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.633 11:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.633 11:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.633 11:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.633 11:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.633 { 00:19:46.633 "cntlid": 139, 00:19:46.633 "qid": 0, 00:19:46.633 "state": "enabled", 00:19:46.633 "thread": "nvmf_tgt_poll_group_000", 00:19:46.633 "listen_address": { 00:19:46.633 "trtype": "TCP", 00:19:46.633 "adrfam": "IPv4", 00:19:46.633 "traddr": "10.0.0.2", 00:19:46.633 "trsvcid": "4420" 00:19:46.633 }, 00:19:46.633 "peer_address": { 00:19:46.633 "trtype": "TCP", 00:19:46.633 "adrfam": "IPv4", 00:19:46.633 "traddr": "10.0.0.1", 00:19:46.633 "trsvcid": "49322" 00:19:46.633 }, 00:19:46.633 "auth": { 00:19:46.633 "state": "completed", 00:19:46.633 "digest": "sha512", 00:19:46.633 "dhgroup": "ffdhe8192" 00:19:46.633 } 00:19:46.633 } 00:19:46.633 ]' 00:19:46.633 11:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.633 11:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:46.633 11:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.633 11:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:46.633 11:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.891 11:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.891 11:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.891 11:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.458 11:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NjQ5ZjlmNDVlOThmMTExNzVkNWU2YjA5NjEzZTNiN2NUVF1z: --dhchap-ctrl-secret DHHC-1:02:MjA5ODkzZjMzNDE2OWY3YWQ0MWU4MTk3M2NmN2UwNGUzN2UwMjdiZjY5YzIxN2NjjBu1gg==: 00:19:48.025 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.025 11:35:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:48.025 11:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.025 11:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.025 11:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.025 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.025 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:48.025 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:48.298 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:48.298 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.298 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:48.299 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:48.299 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:48.299 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.299 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.299 11:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.299 11:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.299 11:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.299 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.299 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.269 00:19:49.269 11:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.269 11:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.269 11:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.269 11:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.269 11:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.269 11:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.269 11:35:23 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:49.269 11:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.269 11:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.269 { 00:19:49.269 "cntlid": 141, 00:19:49.269 "qid": 0, 00:19:49.269 "state": "enabled", 00:19:49.269 "thread": "nvmf_tgt_poll_group_000", 00:19:49.269 "listen_address": { 00:19:49.269 "trtype": "TCP", 00:19:49.269 "adrfam": "IPv4", 00:19:49.269 "traddr": "10.0.0.2", 00:19:49.269 "trsvcid": "4420" 00:19:49.269 }, 00:19:49.269 "peer_address": { 00:19:49.269 "trtype": "TCP", 00:19:49.269 "adrfam": "IPv4", 00:19:49.269 "traddr": "10.0.0.1", 00:19:49.269 "trsvcid": "35372" 00:19:49.269 }, 00:19:49.269 "auth": { 00:19:49.269 "state": "completed", 00:19:49.269 "digest": "sha512", 00:19:49.269 "dhgroup": "ffdhe8192" 00:19:49.269 } 00:19:49.269 } 00:19:49.269 ]' 00:19:49.269 11:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.528 11:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:49.528 11:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.528 11:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:49.528 11:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.528 11:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.528 11:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.528 11:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.786 11:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTRmZWM5NDQ2MmMzZTBkNjI5Nzk5YTkxYThiNmM5OWM3MDQ0NzQ0Yjg5Yjc5NWE4IA+76Q==: --dhchap-ctrl-secret DHHC-1:01:ODhhZGI0NzA0MjEyMzM3MjdlNWVlYzhkNzI4MDU0MDKlMAdD: 00:19:50.723 11:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.723 11:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:50.723 11:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.723 11:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.723 11:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.723 11:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.723 11:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:50.723 11:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:50.982 11:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:19:50.982 11:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.982 11:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:50.982 11:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:50.982 11:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:50.982 11:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.982 11:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:50.982 11:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.982 11:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.982 11:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.982 11:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.982 11:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.918 00:19:51.918 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.918 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.918 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.918 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.918 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.918 11:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.918 11:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.918 11:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.918 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.918 { 00:19:51.918 "cntlid": 143, 00:19:51.918 "qid": 0, 00:19:51.918 "state": "enabled", 00:19:51.918 "thread": "nvmf_tgt_poll_group_000", 00:19:51.918 "listen_address": { 00:19:51.918 "trtype": "TCP", 00:19:51.918 "adrfam": "IPv4", 00:19:51.918 "traddr": "10.0.0.2", 00:19:51.918 "trsvcid": "4420" 00:19:51.918 }, 00:19:51.918 "peer_address": { 00:19:51.918 "trtype": "TCP", 00:19:51.918 "adrfam": "IPv4", 00:19:51.918 "traddr": "10.0.0.1", 00:19:51.918 "trsvcid": "35400" 00:19:51.918 }, 00:19:51.918 "auth": { 00:19:51.918 "state": "completed", 00:19:51.918 "digest": "sha512", 00:19:51.918 "dhgroup": "ffdhe8192" 00:19:51.918 } 00:19:51.918 } 00:19:51.918 ]' 00:19:51.918 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.176 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:52.176 
11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.176 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:52.176 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.176 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.176 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.176 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.434 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTFmOWEzNGYzNGExZDM3OWQ2NWZlNjg2YmM0MzdjMWUzYzZjZDU2MmI3ZWI4Y2JhZGRiYmQ1OTBlNThkM2RmNnccU+g=: 00:19:53.000 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.000 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:53.000 11:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.000 11:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.000 11:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.000 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:53.000 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:53.000 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:53.000 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:53.000 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:53.000 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:53.258 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:53.258 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.258 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:53.258 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:53.258 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:53.258 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.258 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:53.258 11:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.258 11:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.258 11:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.258 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.258 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.632 00:19:54.632 11:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.632 11:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.632 11:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.632 11:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.632 11:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.632 11:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.632 11:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.632 11:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.632 11:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.632 { 00:19:54.632 "cntlid": 145, 00:19:54.632 "qid": 0, 00:19:54.632 "state": "enabled", 00:19:54.632 "thread": "nvmf_tgt_poll_group_000", 00:19:54.632 "listen_address": { 00:19:54.632 "trtype": "TCP", 00:19:54.632 "adrfam": "IPv4", 00:19:54.632 "traddr": "10.0.0.2", 00:19:54.632 "trsvcid": "4420" 00:19:54.632 }, 00:19:54.632 "peer_address": { 00:19:54.632 "trtype": "TCP", 00:19:54.632 "adrfam": "IPv4", 00:19:54.632 "traddr": "10.0.0.1", 00:19:54.632 "trsvcid": "35418" 00:19:54.632 }, 00:19:54.632 "auth": { 00:19:54.632 "state": "completed", 00:19:54.632 "digest": "sha512", 00:19:54.632 "dhgroup": "ffdhe8192" 00:19:54.632 } 00:19:54.632 } 00:19:54.632 ]' 00:19:54.632 11:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.632 11:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:54.632 11:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.632 11:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:54.632 11:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.890 11:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.890 11:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.890 11:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.148 11:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzY3YTcwZDhkM2ZhMzIyNTA0NWI3YjcwZWM3OTIyZTQ5N2FkNWMwNTY1ODk5YTlid4de6A==: --dhchap-ctrl-secret DHHC-1:03:NDQ1NTM5NjdiYjM2ZGZlZjgyMWU3ZjE1YTc3N2M2NGQ2MzhjMTBlZWM0NmM3YTUzOGI3OTkzMGJiMTE0YzVhY5XLBos=: 00:19:55.714 11:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.973 11:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:55.973 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.973 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.973 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.973 11:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 00:19:55.973 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.973 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.973 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.973 11:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:55.973 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:55.973 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:55.973 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:55.973 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.973 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:55.973 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.973 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:55.973 11:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:19:56.539 request: 00:19:56.539 { 00:19:56.539 "name": "nvme0", 00:19:56.539 "trtype": "tcp", 00:19:56.539 "traddr": "10.0.0.2", 00:19:56.539 "adrfam": "ipv4", 00:19:56.539 "trsvcid": "4420", 00:19:56.539 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:56.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:56.539 "prchk_reftag": false, 00:19:56.539 "prchk_guard": false, 00:19:56.539 "hdgst": false, 00:19:56.539 "ddgst": false, 00:19:56.539 "dhchap_key": "key2", 00:19:56.539 "method": "bdev_nvme_attach_controller", 00:19:56.539 "req_id": 1 00:19:56.539 } 00:19:56.539 Got JSON-RPC error response 00:19:56.539 response: 00:19:56.539 { 00:19:56.539 "code": -5, 00:19:56.539 "message": "Input/output error" 00:19:56.539 } 00:19:56.539 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:56.539 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:56.539 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:56.539 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:56.540 11:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:56.540 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.540 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.540 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.540 11:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.540 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.540 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.540 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.540 11:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:56.540 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:56.540 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:56.540 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:56.540 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:56.540 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:56.540 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:56.540 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:56.540 11:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:57.476 request: 00:19:57.476 { 00:19:57.476 "name": "nvme0", 00:19:57.476 "trtype": "tcp", 00:19:57.476 "traddr": "10.0.0.2", 00:19:57.476 "adrfam": "ipv4", 00:19:57.476 "trsvcid": "4420", 00:19:57.476 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:57.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:57.476 "prchk_reftag": false, 00:19:57.476 "prchk_guard": false, 00:19:57.476 "hdgst": false, 00:19:57.476 "ddgst": false, 00:19:57.476 "dhchap_key": "key1", 00:19:57.476 "dhchap_ctrlr_key": "ckey2", 00:19:57.476 "method": "bdev_nvme_attach_controller", 00:19:57.476 "req_id": 1 00:19:57.476 } 00:19:57.476 Got JSON-RPC error response 00:19:57.476 response: 00:19:57.476 { 00:19:57.476 "code": -5, 00:19:57.476 "message": "Input/output error" 00:19:57.476 } 00:19:57.476 11:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:57.477 11:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:57.477 11:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:57.477 11:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:57.477 11:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:57.477 11:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.477 11:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.477 11:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.477 11:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 00:19:57.477 11:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.477 11:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.477 11:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.477 11:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.477 11:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:57.477 11:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.477 11:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:19:57.477 11:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:57.477 11:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:57.477 11:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:57.477 11:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.477 11:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.045 request: 00:19:58.045 { 00:19:58.045 "name": "nvme0", 00:19:58.045 "trtype": "tcp", 00:19:58.045 "traddr": "10.0.0.2", 00:19:58.045 "adrfam": "ipv4", 00:19:58.045 "trsvcid": "4420", 00:19:58.045 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:58.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:58.045 "prchk_reftag": false, 00:19:58.045 "prchk_guard": false, 00:19:58.045 "hdgst": false, 00:19:58.045 "ddgst": false, 00:19:58.045 "dhchap_key": "key1", 00:19:58.045 "dhchap_ctrlr_key": "ckey1", 00:19:58.045 "method": "bdev_nvme_attach_controller", 00:19:58.045 "req_id": 1 00:19:58.045 } 00:19:58.045 Got JSON-RPC error response 00:19:58.045 response: 00:19:58.045 { 00:19:58.045 "code": -5, 00:19:58.045 "message": "Input/output error" 00:19:58.045 } 00:19:58.045 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:58.045 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:58.045 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:58.045 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:58.045 11:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:58.045 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.045 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.045 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.045 11:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2783669 00:19:58.045 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2783669 ']' 00:19:58.045 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2783669 00:19:58.045 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:58.045 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:58.045 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2783669 00:19:58.045 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:58.045 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:19:58.045 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2783669' 00:19:58.045 killing process with pid 2783669 00:19:58.045 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2783669 00:19:58.045 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2783669 00:19:58.305 11:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:58.305 11:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:58.305 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:58.305 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.305 11:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2815070 00:19:58.305 11:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2815070 00:19:58.305 11:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:58.305 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2815070 ']' 00:19:58.305 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.305 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:58.305 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.305 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:58.305 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.240 11:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:59.240 11:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:59.241 11:35:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:59.241 11:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:59.241 11:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.241 11:35:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.241 11:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:59.241 11:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2815070 00:19:59.241 11:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2815070 ']' 00:19:59.241 11:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.241 11:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:59.241 11:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
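The connect_authenticate passes traced in this section boil down to a short RPC sequence. A minimal stand-alone sketch follows, using the socket paths, NQNs and key names visible in the trace; the target-side calls are assumed to go to the target's default RPC socket (the rpc_cmd wrapper hides it), and on another setup the addresses, digests and DH groups would differ.

  host_nqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
  subnqn=nqn.2024-03.io.spdk:cnode0
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Host side: restrict the initiator's DH-HMAC-CHAP digests and DH groups.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

  # Target side: allow the host NQN on the subsystem and bind it to a DH-HMAC-CHAP key.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$host_nqn" --dhchap-key key3

  # Host side: attach a controller, authenticating with the same key.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$host_nqn" -n "$subnqn" --dhchap-key key3

  # Verify the qpair completed authentication with the expected digest and DH group.
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'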
00:19:59.241 11:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:59.241 11:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.499 11:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:59.499 11:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:59.499 11:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:59.499 11:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.499 11:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.757 11:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.757 11:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:59.757 11:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.757 11:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:59.757 11:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:59.757 11:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:59.757 11:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.757 11:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:59.757 11:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.757 11:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.757 11:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.757 11:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:59.757 11:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:00.323 00:20:00.323 11:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.323 11:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.323 11:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.582 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.582 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.582 11:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.582 11:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.582 11:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.582 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.582 { 00:20:00.582 
"cntlid": 1, 00:20:00.582 "qid": 0, 00:20:00.582 "state": "enabled", 00:20:00.582 "thread": "nvmf_tgt_poll_group_000", 00:20:00.582 "listen_address": { 00:20:00.582 "trtype": "TCP", 00:20:00.582 "adrfam": "IPv4", 00:20:00.582 "traddr": "10.0.0.2", 00:20:00.582 "trsvcid": "4420" 00:20:00.582 }, 00:20:00.582 "peer_address": { 00:20:00.582 "trtype": "TCP", 00:20:00.582 "adrfam": "IPv4", 00:20:00.582 "traddr": "10.0.0.1", 00:20:00.582 "trsvcid": "54190" 00:20:00.582 }, 00:20:00.582 "auth": { 00:20:00.582 "state": "completed", 00:20:00.582 "digest": "sha512", 00:20:00.582 "dhgroup": "ffdhe8192" 00:20:00.582 } 00:20:00.582 } 00:20:00.582 ]' 00:20:00.582 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.840 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:00.840 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.840 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:00.840 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.840 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.840 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.840 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.097 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTFmOWEzNGYzNGExZDM3OWQ2NWZlNjg2YmM0MzdjMWUzYzZjZDU2MmI3ZWI4Y2JhZGRiYmQ1OTBlNThkM2RmNnccU+g=: 00:20:02.029 11:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.029 11:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:02.029 11:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.029 11:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.029 11:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.029 11:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:02.030 11:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.030 11:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.030 11:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.030 11:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:02.030 11:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:02.287 11:35:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.287 11:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:02.287 11:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.287 11:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:02.287 11:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:02.287 11:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:02.287 11:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:02.287 11:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.287 11:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.287 request: 00:20:02.287 { 00:20:02.287 "name": "nvme0", 00:20:02.287 "trtype": "tcp", 00:20:02.287 "traddr": "10.0.0.2", 00:20:02.287 "adrfam": "ipv4", 00:20:02.287 "trsvcid": "4420", 00:20:02.287 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:02.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:02.287 "prchk_reftag": false, 00:20:02.287 "prchk_guard": false, 00:20:02.287 "hdgst": false, 00:20:02.287 "ddgst": false, 00:20:02.287 "dhchap_key": "key3", 00:20:02.287 "method": "bdev_nvme_attach_controller", 00:20:02.287 "req_id": 1 00:20:02.287 } 00:20:02.287 Got JSON-RPC error response 00:20:02.287 response: 00:20:02.287 { 00:20:02.287 "code": -5, 00:20:02.287 "message": "Input/output error" 00:20:02.287 } 00:20:02.545 11:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:02.545 11:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:02.545 11:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:02.545 11:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:02.545 11:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:02.545 11:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:02.545 11:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:02.545 11:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:02.803 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.803 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:02.804 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.804 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:02.804 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:02.804 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:02.804 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:02.804 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.804 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.804 request: 00:20:02.804 { 00:20:02.804 "name": "nvme0", 00:20:02.804 "trtype": "tcp", 00:20:02.804 "traddr": "10.0.0.2", 00:20:02.804 "adrfam": "ipv4", 00:20:02.804 "trsvcid": "4420", 00:20:02.804 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:02.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:02.804 "prchk_reftag": false, 00:20:02.804 "prchk_guard": false, 00:20:02.804 "hdgst": false, 00:20:02.804 "ddgst": false, 00:20:02.804 "dhchap_key": "key3", 00:20:02.804 "method": "bdev_nvme_attach_controller", 00:20:02.804 "req_id": 1 00:20:02.804 } 00:20:02.804 Got JSON-RPC error response 00:20:02.804 response: 00:20:02.804 { 00:20:02.804 "code": -5, 00:20:02.804 "message": "Input/output error" 00:20:02.804 } 00:20:03.062 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:03.062 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:03.062 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:03.062 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:03.062 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:03.062 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:03.062 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:03.062 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:03.062 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:03.062 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:03.321 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:03.321 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.321 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.321 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.321 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:03.321 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.321 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.321 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.321 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:03.321 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:03.321 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:03.321 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:03.321 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:03.321 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:03.321 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:03.321 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:03.321 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:03.579 request: 00:20:03.579 { 00:20:03.579 "name": "nvme0", 00:20:03.579 "trtype": "tcp", 00:20:03.579 "traddr": "10.0.0.2", 00:20:03.579 "adrfam": "ipv4", 00:20:03.579 "trsvcid": "4420", 00:20:03.579 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:03.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:03.579 "prchk_reftag": false, 00:20:03.579 "prchk_guard": false, 00:20:03.579 "hdgst": false, 00:20:03.579 "ddgst": false, 00:20:03.579 
"dhchap_key": "key0", 00:20:03.579 "dhchap_ctrlr_key": "key1", 00:20:03.579 "method": "bdev_nvme_attach_controller", 00:20:03.579 "req_id": 1 00:20:03.579 } 00:20:03.579 Got JSON-RPC error response 00:20:03.579 response: 00:20:03.579 { 00:20:03.579 "code": -5, 00:20:03.579 "message": "Input/output error" 00:20:03.579 } 00:20:03.579 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:03.579 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:03.579 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:03.579 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:03.579 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:03.579 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:03.837 00:20:03.837 11:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:03.837 11:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:20:03.838 11:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.404 11:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.404 11:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.404 11:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.970 11:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:04.970 11:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:04.970 11:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2783706 00:20:04.970 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2783706 ']' 00:20:04.970 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2783706 00:20:04.970 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:04.970 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:04.970 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2783706 00:20:04.970 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:04.970 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:04.970 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2783706' 00:20:04.970 killing process with pid 2783706 00:20:04.970 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2783706 00:20:04.970 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2783706 
00:20:05.228 11:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:05.228 11:35:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:05.228 11:35:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:05.228 11:35:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:05.228 11:35:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:05.228 11:35:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:05.228 11:35:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:05.228 rmmod nvme_tcp 00:20:05.228 rmmod nvme_fabrics 00:20:05.228 rmmod nvme_keyring 00:20:05.228 11:35:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:05.228 11:35:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:05.228 11:35:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:05.228 11:35:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2815070 ']' 00:20:05.229 11:35:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2815070 00:20:05.229 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2815070 ']' 00:20:05.229 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2815070 00:20:05.229 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:05.229 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:05.229 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2815070 00:20:05.229 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:05.229 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:05.229 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2815070' 00:20:05.229 killing process with pid 2815070 00:20:05.229 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2815070 00:20:05.229 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2815070 00:20:05.487 11:35:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:05.487 11:35:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:05.487 11:35:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:05.487 11:35:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:05.488 11:35:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:05.488 11:35:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.488 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:05.488 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.023 11:35:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:08.023 11:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.8KE /tmp/spdk.key-sha256.fPe /tmp/spdk.key-sha384.cWK /tmp/spdk.key-sha512.gMf /tmp/spdk.key-sha512.lxA /tmp/spdk.key-sha384.OP7 /tmp/spdk.key-sha256.VTP '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:08.023 00:20:08.023 real 3m3.969s 00:20:08.023 user 7m9.547s 00:20:08.023 sys 0m24.771s 00:20:08.023 11:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:08.023 11:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.023 ************************************ 00:20:08.023 END TEST nvmf_auth_target 00:20:08.023 ************************************ 00:20:08.023 11:35:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:08.023 11:35:41 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:20:08.023 11:35:41 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:08.023 11:35:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:08.023 11:35:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:08.023 11:35:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:08.023 ************************************ 00:20:08.023 START TEST nvmf_bdevio_no_huge 00:20:08.023 ************************************ 00:20:08.023 11:35:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:08.023 * Looking for test storage... 00:20:08.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
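[editor's note] The nvmf_bdevio_no_huge test starting here is launched through the run_test wrapper with the tcp transport and hugepages disabled. A hedged sketch of the standalone invocation, assuming the workspace layout shown in the log and a root shell with the autorun-spdk.conf environment already applied:
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages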
00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:08.023 11:35:42 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:08.023 11:35:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
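[editor's note] The arrays being filled here (0x1592/0x159b for Intel E810, 0x37d2 for X722, plus the Mellanox ConnectX IDs) are matched against the host's PCI bus, and the kernel netdev for each matching function is read straight from sysfs. A small sketch of that lookup, assuming the two E810 ports reported below at 0000:af:00.0 and 0000:af:00.1:
  # same sysfs path nvmf/common.sh expands into pci_net_devs
  for pci in 0000:af:00.0 0000:af:00.1; do
      ls /sys/bus/pci/devices/$pci/net/      # prints cvl_0_0 / cvl_0_1 on this node
  done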
00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:13.300 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:13.300 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:13.300 
11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:13.300 Found net devices under 0000:af:00.0: cvl_0_0 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:13.300 Found net devices under 0000:af:00.1: cvl_0_1 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.300 11:35:47 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:13.300 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:13.558 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:13.558 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:13.558 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:13.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:13.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:20:13.558 00:20:13.558 --- 10.0.0.2 ping statistics --- 00:20:13.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.558 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:20:13.558 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:13.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:13.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:20:13.558 00:20:13.558 --- 10.0.0.1 ping statistics --- 00:20:13.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.558 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:20:13.558 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.558 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:13.558 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:13.558 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:13.558 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:13.558 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:13.558 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:13.558 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:13.558 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:13.558 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:13.558 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:13.558 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:13.558 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:13.558 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2819893 00:20:13.558 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 
2819893 00:20:13.558 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:13.558 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 2819893 ']' 00:20:13.558 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.558 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:13.558 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.558 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:13.558 11:35:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:13.558 [2024-07-15 11:35:47.945068] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:20:13.558 [2024-07-15 11:35:47.945132] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:13.816 [2024-07-15 11:35:48.057133] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:14.074 [2024-07-15 11:35:48.297202] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.074 [2024-07-15 11:35:48.297278] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.074 [2024-07-15 11:35:48.297301] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.074 [2024-07-15 11:35:48.297320] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.074 [2024-07-15 11:35:48.297336] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
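[editor's note] The target is reachable over TCP because nvmf_tcp_init splits the two E810 ports between namespaces: cvl_0_0 (10.0.0.2) is moved into cvl_0_0_ns_spdk where nvmf_tgt runs without hugepages, while the initiator stays on cvl_0_1 (10.0.0.1) in the default namespace. A condensed sketch of that setup, using only commands recorded in the trace above:
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                    # sanity check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # target launched inside the namespace with 1024 MB of regular memory, core mask 0x78
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78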
00:20:14.074 [2024-07-15 11:35:48.297477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:14.074 [2024-07-15 11:35:48.297596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:14.074 [2024-07-15 11:35:48.297716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:14.074 [2024-07-15 11:35:48.297721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:14.638 [2024-07-15 11:35:48.937370] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:14.638 Malloc0 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:14.638 [2024-07-15 11:35:48.986875] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:14.638 { 00:20:14.638 "params": { 00:20:14.638 "name": "Nvme$subsystem", 00:20:14.638 "trtype": "$TEST_TRANSPORT", 00:20:14.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:14.638 "adrfam": "ipv4", 00:20:14.638 "trsvcid": "$NVMF_PORT", 00:20:14.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:14.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:14.638 "hdgst": ${hdgst:-false}, 00:20:14.638 "ddgst": ${ddgst:-false} 00:20:14.638 }, 00:20:14.638 "method": "bdev_nvme_attach_controller" 00:20:14.638 } 00:20:14.638 EOF 00:20:14.638 )") 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:14.638 11:35:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:20:14.638 11:35:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:14.638 11:35:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:14.638 "params": { 00:20:14.638 "name": "Nvme1", 00:20:14.638 "trtype": "tcp", 00:20:14.638 "traddr": "10.0.0.2", 00:20:14.638 "adrfam": "ipv4", 00:20:14.638 "trsvcid": "4420", 00:20:14.638 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.638 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:14.638 "hdgst": false, 00:20:14.638 "ddgst": false 00:20:14.638 }, 00:20:14.638 "method": "bdev_nvme_attach_controller" 00:20:14.638 }' 00:20:14.638 [2024-07-15 11:35:49.036335] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
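[editor's note] Before the bdevio application starts, bdevio.sh configures the target through four rpc_cmd calls (transport, malloc bdev, subsystem, listener), and gen_nvmf_target_json emits the JSON shown above, which bdevio reads on /dev/fd/62 to attach Nvme1 as its test bdev. A sketch of the equivalent rpc.py calls, assuming rpc_cmd resolves to scripts/rpc.py against the target's default /var/tmp/spdk.sock:
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420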
00:20:14.638 [2024-07-15 11:35:49.036399] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2820175 ] 00:20:14.896 [2024-07-15 11:35:49.123566] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:14.896 [2024-07-15 11:35:49.241441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.896 [2024-07-15 11:35:49.241555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.896 [2024-07-15 11:35:49.241555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.153 I/O targets: 00:20:15.153 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:15.153 00:20:15.153 00:20:15.153 CUnit - A unit testing framework for C - Version 2.1-3 00:20:15.153 http://cunit.sourceforge.net/ 00:20:15.153 00:20:15.153 00:20:15.153 Suite: bdevio tests on: Nvme1n1 00:20:15.153 Test: blockdev write read block ...passed 00:20:15.410 Test: blockdev write zeroes read block ...passed 00:20:15.410 Test: blockdev write zeroes read no split ...passed 00:20:15.410 Test: blockdev write zeroes read split ...passed 00:20:15.410 Test: blockdev write zeroes read split partial ...passed 00:20:15.410 Test: blockdev reset ...[2024-07-15 11:35:49.729906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:15.410 [2024-07-15 11:35:49.729986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd6520 (9): Bad file descriptor 00:20:15.410 [2024-07-15 11:35:49.749653] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:15.410 passed 00:20:15.410 Test: blockdev write read 8 blocks ...passed 00:20:15.410 Test: blockdev write read size > 128k ...passed 00:20:15.410 Test: blockdev write read invalid size ...passed 00:20:15.410 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:15.410 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:15.410 Test: blockdev write read max offset ...passed 00:20:15.667 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:15.667 Test: blockdev writev readv 8 blocks ...passed 00:20:15.667 Test: blockdev writev readv 30 x 1block ...passed 00:20:15.667 Test: blockdev writev readv block ...passed 00:20:15.667 Test: blockdev writev readv size > 128k ...passed 00:20:15.667 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:15.667 Test: blockdev comparev and writev ...[2024-07-15 11:35:49.971250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:15.667 [2024-07-15 11:35:49.971326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.667 [2024-07-15 11:35:49.971369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:15.667 [2024-07-15 11:35:49.971394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:15.667 [2024-07-15 11:35:49.972026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:15.667 [2024-07-15 11:35:49.972059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:15.667 [2024-07-15 11:35:49.972096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:15.667 [2024-07-15 11:35:49.972118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:15.667 [2024-07-15 11:35:49.972795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:15.667 [2024-07-15 11:35:49.972829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:15.667 [2024-07-15 11:35:49.972865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:15.667 [2024-07-15 11:35:49.972887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:15.667 [2024-07-15 11:35:49.973600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:15.667 [2024-07-15 11:35:49.973633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:15.667 [2024-07-15 11:35:49.973670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:15.667 [2024-07-15 11:35:49.973692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:15.667 passed 00:20:15.667 Test: blockdev nvme passthru rw ...passed 00:20:15.667 Test: blockdev nvme passthru vendor specific ...[2024-07-15 11:35:50.055770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:15.667 [2024-07-15 11:35:50.055818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:15.667 [2024-07-15 11:35:50.056096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:15.667 [2024-07-15 11:35:50.056126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:15.667 [2024-07-15 11:35:50.056399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:15.667 [2024-07-15 11:35:50.056429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:15.667 [2024-07-15 11:35:50.056688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:15.667 [2024-07-15 11:35:50.056718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:15.667 passed 00:20:15.667 Test: blockdev nvme admin passthru ...passed 00:20:15.667 Test: blockdev copy ...passed 00:20:15.667 00:20:15.667 Run Summary: Type Total Ran Passed Failed Inactive 00:20:15.667 suites 1 1 n/a 0 0 00:20:15.668 tests 23 23 23 0 0 00:20:15.668 asserts 152 152 152 0 n/a 00:20:15.668 00:20:15.668 Elapsed time = 1.097 seconds 00:20:16.234 11:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:16.234 11:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.234 11:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:16.234 11:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.234 11:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:16.234 11:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:16.234 11:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:16.234 11:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:16.234 11:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:16.234 11:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:16.234 11:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:16.234 11:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:16.234 rmmod nvme_tcp 00:20:16.234 rmmod nvme_fabrics 00:20:16.234 rmmod nvme_keyring 00:20:16.234 11:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:16.234 11:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:16.234 11:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:16.234 11:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2819893 ']' 00:20:16.234 11:35:50 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2819893 00:20:16.234 11:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 2819893 ']' 00:20:16.234 11:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 2819893 00:20:16.234 11:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:20:16.234 11:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:16.234 11:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2819893 00:20:16.234 11:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:16.234 11:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:16.234 11:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2819893' 00:20:16.234 killing process with pid 2819893 00:20:16.234 11:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 2819893 00:20:16.234 11:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 2819893 00:20:17.170 11:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:17.170 11:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:17.170 11:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:17.170 11:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:17.170 11:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:17.170 11:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.170 11:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:17.170 11:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.080 11:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:19.080 00:20:19.080 real 0m11.411s 00:20:19.080 user 0m15.324s 00:20:19.080 sys 0m5.819s 00:20:19.080 11:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:19.080 11:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:19.080 ************************************ 00:20:19.080 END TEST nvmf_bdevio_no_huge 00:20:19.080 ************************************ 00:20:19.080 11:35:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:19.080 11:35:53 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:19.080 11:35:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:19.080 11:35:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:19.080 11:35:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:19.080 ************************************ 00:20:19.080 START TEST nvmf_tls 00:20:19.080 ************************************ 00:20:19.080 11:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:19.339 * Looking for test storage... 
00:20:19.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:19.339 11:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:24.612 
11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:24.612 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:24.612 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.612 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:24.879 Found net devices under 0000:af:00.0: cvl_0_0 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:24.879 Found net devices under 0000:af:00.1: cvl_0_1 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:24.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:24.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:20:24.879 00:20:24.879 --- 10.0.0.2 ping statistics --- 00:20:24.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.879 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:20:24.879 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:25.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:25.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:20:25.229 00:20:25.229 --- 10.0.0.1 ping statistics --- 00:20:25.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.229 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:20:25.229 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:25.229 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:25.229 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:25.229 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:25.229 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:25.229 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:25.229 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:25.229 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:25.229 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:25.229 11:35:59 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:25.229 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:25.229 11:35:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:25.229 11:35:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.229 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2824100 00:20:25.229 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:25.229 11:35:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2824100 00:20:25.230 11:35:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2824100 ']' 00:20:25.230 11:35:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.230 11:35:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:25.230 11:35:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.230 11:35:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:25.230 11:35:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.230 [2024-07-15 11:35:59.448561] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:20:25.230 [2024-07-15 11:35:59.448622] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.230 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.230 [2024-07-15 11:35:59.541234] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.230 [2024-07-15 11:35:59.644537] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:25.230 [2024-07-15 11:35:59.644585] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:25.230 [2024-07-15 11:35:59.644598] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:25.230 [2024-07-15 11:35:59.644609] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:25.230 [2024-07-15 11:35:59.644619] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:25.230 [2024-07-15 11:35:59.644644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.609 11:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:26.609 11:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:26.609 11:36:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:26.609 11:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:26.609 11:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:26.609 11:36:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.609 11:36:00 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:26.609 11:36:00 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:26.609 true 00:20:26.609 11:36:00 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:26.609 11:36:00 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:26.868 11:36:01 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:26.868 11:36:01 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:26.868 11:36:01 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:27.127 11:36:01 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:27.127 11:36:01 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:27.386 11:36:01 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:27.386 11:36:01 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:27.386 11:36:01 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:27.644 11:36:01 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:27.644 11:36:01 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:27.903 11:36:02 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:27.903 11:36:02 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:27.903 11:36:02 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:27.903 11:36:02 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:28.161 11:36:02 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:28.161 11:36:02 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:28.161 11:36:02 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:28.420 11:36:02 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:28.420 11:36:02 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:28.420 11:36:02 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:28.420 11:36:02 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:28.420 11:36:02 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:28.679 11:36:03 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:28.679 11:36:03 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:28.939 11:36:03 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:28.939 11:36:03 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:28.939 11:36:03 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:28.939 11:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:28.939 11:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:28.939 11:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:28.939 11:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:28.939 11:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:28.939 11:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:29.198 11:36:03 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:29.198 11:36:03 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:29.198 11:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:29.198 11:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:29.198 11:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:29.198 11:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:29.198 11:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:29.198 11:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:29.198 11:36:03 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:29.198 11:36:03 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:29.198 11:36:03 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.nbtjDLiMkJ 00:20:29.198 11:36:03 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:29.198 11:36:03 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.wJdUbDkPsF 00:20:29.198 11:36:03 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:29.198 11:36:03 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:29.198 11:36:03 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.nbtjDLiMkJ 00:20:29.198 11:36:03 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.wJdUbDkPsF 00:20:29.198 11:36:03 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:20:29.457 11:36:03 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:29.718 11:36:04 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.nbtjDLiMkJ 00:20:29.718 11:36:04 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.nbtjDLiMkJ 00:20:29.718 11:36:04 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:29.977 [2024-07-15 11:36:04.283700] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.977 11:36:04 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:30.235 11:36:04 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:30.493 [2024-07-15 11:36:04.773039] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:30.493 [2024-07-15 11:36:04.773282] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.493 11:36:04 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:30.752 malloc0 00:20:30.752 11:36:05 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:31.010 11:36:05 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nbtjDLiMkJ 00:20:31.269 [2024-07-15 11:36:05.509284] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:31.269 11:36:05 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.nbtjDLiMkJ 00:20:31.269 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.242 Initializing NVMe Controllers 00:20:41.242 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:41.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:41.242 Initialization complete. Launching workers. 
00:20:41.242 ======================================================== 00:20:41.242 Latency(us) 00:20:41.242 Device Information : IOPS MiB/s Average min max 00:20:41.242 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8411.90 32.86 7610.53 1265.09 8219.96 00:20:41.242 ======================================================== 00:20:41.242 Total : 8411.90 32.86 7610.53 1265.09 8219.96 00:20:41.243 00:20:41.243 11:36:15 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nbtjDLiMkJ 00:20:41.243 11:36:15 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:41.243 11:36:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:41.243 11:36:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:41.243 11:36:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.nbtjDLiMkJ' 00:20:41.243 11:36:15 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:41.243 11:36:15 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2827430 00:20:41.243 11:36:15 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:41.243 11:36:15 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:41.243 11:36:15 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2827430 /var/tmp/bdevperf.sock 00:20:41.243 11:36:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2827430 ']' 00:20:41.243 11:36:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:41.243 11:36:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:41.243 11:36:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:41.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:41.243 11:36:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:41.243 11:36:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.243 [2024-07-15 11:36:15.692642] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:20:41.243 [2024-07-15 11:36:15.692702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2827430 ] 00:20:41.502 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.502 [2024-07-15 11:36:15.804691] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.502 [2024-07-15 11:36:15.953326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.439 11:36:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:42.439 11:36:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:42.439 11:36:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nbtjDLiMkJ 00:20:42.439 [2024-07-15 11:36:16.864359] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:42.439 [2024-07-15 11:36:16.864504] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:42.695 TLSTESTn1 00:20:42.695 11:36:16 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:42.695 Running I/O for 10 seconds... 00:20:52.667 00:20:52.667 Latency(us) 00:20:52.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.667 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:52.667 Verification LBA range: start 0x0 length 0x2000 00:20:52.667 TLSTESTn1 : 10.02 2848.69 11.13 0.00 0.00 44813.15 10902.81 43372.92 00:20:52.667 =================================================================================================================== 00:20:52.667 Total : 2848.69 11.13 0.00 0.00 44813.15 10902.81 43372.92 00:20:52.667 0 00:20:52.927 11:36:27 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:52.927 11:36:27 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2827430 00:20:52.927 11:36:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2827430 ']' 00:20:52.927 11:36:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2827430 00:20:52.927 11:36:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:52.927 11:36:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:52.927 11:36:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2827430 00:20:52.927 11:36:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:52.927 11:36:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:52.927 11:36:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2827430' 00:20:52.927 killing process with pid 2827430 00:20:52.927 11:36:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2827430 00:20:52.927 Received shutdown signal, test time was about 10.000000 seconds 00:20:52.927 00:20:52.927 Latency(us) 00:20:52.927 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:20:52.927 =================================================================================================================== 00:20:52.927 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:52.927 [2024-07-15 11:36:27.193678] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:52.927 11:36:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2827430 00:20:53.186 11:36:27 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wJdUbDkPsF 00:20:53.186 11:36:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:53.186 11:36:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wJdUbDkPsF 00:20:53.186 11:36:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:53.186 11:36:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:53.186 11:36:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:53.186 11:36:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:53.186 11:36:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wJdUbDkPsF 00:20:53.186 11:36:27 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:53.186 11:36:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:53.186 11:36:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:53.186 11:36:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wJdUbDkPsF' 00:20:53.186 11:36:27 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:53.186 11:36:27 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2829451 00:20:53.186 11:36:27 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:53.187 11:36:27 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:53.187 11:36:27 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2829451 /var/tmp/bdevperf.sock 00:20:53.187 11:36:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2829451 ']' 00:20:53.187 11:36:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.187 11:36:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:53.187 11:36:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:53.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:53.187 11:36:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:53.187 11:36:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.187 [2024-07-15 11:36:27.604485] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:20:53.187 [2024-07-15 11:36:27.604555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829451 ] 00:20:53.187 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.444 [2024-07-15 11:36:27.717497] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.445 [2024-07-15 11:36:27.866115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.381 11:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:54.381 11:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:54.381 11:36:28 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wJdUbDkPsF 00:20:54.381 [2024-07-15 11:36:28.763385] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:54.381 [2024-07-15 11:36:28.763528] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:54.381 [2024-07-15 11:36:28.772208] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:54.381 [2024-07-15 11:36:28.772454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2029af0 (107): Transport endpoint is not connected 00:20:54.381 [2024-07-15 11:36:28.773434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2029af0 (9): Bad file descriptor 00:20:54.381 [2024-07-15 11:36:28.774431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:54.381 [2024-07-15 11:36:28.774459] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:54.381 [2024-07-15 11:36:28.774485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:54.381 request: 00:20:54.381 { 00:20:54.381 "name": "TLSTEST", 00:20:54.381 "trtype": "tcp", 00:20:54.381 "traddr": "10.0.0.2", 00:20:54.381 "adrfam": "ipv4", 00:20:54.381 "trsvcid": "4420", 00:20:54.381 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.381 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:54.381 "prchk_reftag": false, 00:20:54.381 "prchk_guard": false, 00:20:54.381 "hdgst": false, 00:20:54.381 "ddgst": false, 00:20:54.381 "psk": "/tmp/tmp.wJdUbDkPsF", 00:20:54.381 "method": "bdev_nvme_attach_controller", 00:20:54.381 "req_id": 1 00:20:54.381 } 00:20:54.381 Got JSON-RPC error response 00:20:54.381 response: 00:20:54.381 { 00:20:54.381 "code": -5, 00:20:54.381 "message": "Input/output error" 00:20:54.381 } 00:20:54.381 11:36:28 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2829451 00:20:54.381 11:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2829451 ']' 00:20:54.381 11:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2829451 00:20:54.381 11:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:54.381 11:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:54.381 11:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2829451 00:20:54.381 11:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:54.381 11:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:54.381 11:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2829451' 00:20:54.381 killing process with pid 2829451 00:20:54.381 11:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2829451 00:20:54.381 Received shutdown signal, test time was about 10.000000 seconds 00:20:54.381 00:20:54.381 Latency(us) 00:20:54.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.381 =================================================================================================================== 00:20:54.381 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:54.381 [2024-07-15 11:36:28.843587] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:54.381 11:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2829451 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.nbtjDLiMkJ 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.nbtjDLiMkJ 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.nbtjDLiMkJ 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.nbtjDLiMkJ' 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2829723 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2829723 /var/tmp/bdevperf.sock 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2829723 ']' 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:54.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:54.949 11:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.949 [2024-07-15 11:36:29.177069] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:20:54.949 [2024-07-15 11:36:29.177128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829723 ] 00:20:54.949 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.949 [2024-07-15 11:36:29.289025] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.207 [2024-07-15 11:36:29.437137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.772 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:55.772 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:55.772 11:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.nbtjDLiMkJ 00:20:56.030 [2024-07-15 11:36:30.285723] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:56.030 [2024-07-15 11:36:30.285879] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:56.030 [2024-07-15 11:36:30.294246] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:56.030 [2024-07-15 11:36:30.294286] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:56.030 [2024-07-15 11:36:30.294328] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:56.030 [2024-07-15 11:36:30.294654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f2af0 (107): Transport endpoint is not connected 00:20:56.030 [2024-07-15 11:36:30.295635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f2af0 (9): Bad file descriptor 00:20:56.030 [2024-07-15 11:36:30.296634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:56.030 [2024-07-15 11:36:30.296661] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:56.030 [2024-07-15 11:36:30.296686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:56.030 request: 00:20:56.030 { 00:20:56.030 "name": "TLSTEST", 00:20:56.030 "trtype": "tcp", 00:20:56.030 "traddr": "10.0.0.2", 00:20:56.030 "adrfam": "ipv4", 00:20:56.030 "trsvcid": "4420", 00:20:56.030 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.030 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:56.030 "prchk_reftag": false, 00:20:56.030 "prchk_guard": false, 00:20:56.030 "hdgst": false, 00:20:56.030 "ddgst": false, 00:20:56.030 "psk": "/tmp/tmp.nbtjDLiMkJ", 00:20:56.030 "method": "bdev_nvme_attach_controller", 00:20:56.030 "req_id": 1 00:20:56.030 } 00:20:56.030 Got JSON-RPC error response 00:20:56.030 response: 00:20:56.030 { 00:20:56.030 "code": -5, 00:20:56.030 "message": "Input/output error" 00:20:56.030 } 00:20:56.030 11:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2829723 00:20:56.030 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2829723 ']' 00:20:56.030 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2829723 00:20:56.030 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:56.030 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:56.030 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2829723 00:20:56.030 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:56.030 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:56.030 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2829723' 00:20:56.030 killing process with pid 2829723 00:20:56.030 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2829723 00:20:56.030 Received shutdown signal, test time was about 10.000000 seconds 00:20:56.030 00:20:56.030 Latency(us) 00:20:56.030 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.030 =================================================================================================================== 00:20:56.030 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:56.030 [2024-07-15 11:36:30.375546] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:56.030 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2829723 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.nbtjDLiMkJ 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.nbtjDLiMkJ 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.nbtjDLiMkJ 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.nbtjDLiMkJ' 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2830002 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2830002 /var/tmp/bdevperf.sock 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2830002 ']' 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:56.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:56.288 11:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.288 [2024-07-15 11:36:30.720754] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:20:56.288 [2024-07-15 11:36:30.720817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830002 ] 00:20:56.288 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.546 [2024-07-15 11:36:30.834243] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.546 [2024-07-15 11:36:30.974225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.480 11:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:57.480 11:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:57.480 11:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nbtjDLiMkJ 00:20:57.480 [2024-07-15 11:36:31.902342] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:57.480 [2024-07-15 11:36:31.902498] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:57.480 [2024-07-15 11:36:31.915671] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:57.480 [2024-07-15 11:36:31.915704] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:57.480 [2024-07-15 11:36:31.915739] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:57.480 [2024-07-15 11:36:31.916519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173eaf0 (107): Transport endpoint is not connected 00:20:57.480 [2024-07-15 11:36:31.917502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173eaf0 (9): Bad file descriptor 00:20:57.480 [2024-07-15 11:36:31.918500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:57.480 [2024-07-15 11:36:31.918526] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:57.480 [2024-07-15 11:36:31.918556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:57.480 request: 00:20:57.480 { 00:20:57.480 "name": "TLSTEST", 00:20:57.480 "trtype": "tcp", 00:20:57.480 "traddr": "10.0.0.2", 00:20:57.480 "adrfam": "ipv4", 00:20:57.480 "trsvcid": "4420", 00:20:57.480 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:57.480 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:57.480 "prchk_reftag": false, 00:20:57.480 "prchk_guard": false, 00:20:57.480 "hdgst": false, 00:20:57.480 "ddgst": false, 00:20:57.480 "psk": "/tmp/tmp.nbtjDLiMkJ", 00:20:57.480 "method": "bdev_nvme_attach_controller", 00:20:57.480 "req_id": 1 00:20:57.480 } 00:20:57.480 Got JSON-RPC error response 00:20:57.480 response: 00:20:57.480 { 00:20:57.480 "code": -5, 00:20:57.480 "message": "Input/output error" 00:20:57.480 } 00:20:57.739 11:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2830002 00:20:57.739 11:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2830002 ']' 00:20:57.739 11:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2830002 00:20:57.739 11:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:57.739 11:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:57.739 11:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2830002 00:20:57.739 11:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:57.739 11:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:57.739 11:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2830002' 00:20:57.739 killing process with pid 2830002 00:20:57.739 11:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2830002 00:20:57.739 Received shutdown signal, test time was about 10.000000 seconds 00:20:57.739 00:20:57.739 Latency(us) 00:20:57.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.739 =================================================================================================================== 00:20:57.739 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:57.739 [2024-07-15 11:36:32.000580] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:57.739 11:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2830002 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2830308 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2830308 /var/tmp/bdevperf.sock 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2830308 ']' 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:57.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:57.998 11:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.998 [2024-07-15 11:36:32.399128] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:20:57.998 [2024-07-15 11:36:32.399200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830308 ] 00:20:57.998 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.256 [2024-07-15 11:36:32.513598] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.256 [2024-07-15 11:36:32.657174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.192 11:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:59.192 11:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:59.192 11:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:59.192 [2024-07-15 11:36:33.581292] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:59.192 [2024-07-15 11:36:33.583132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f19030 (9): Bad file descriptor 00:20:59.192 [2024-07-15 11:36:33.584125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:59.192 [2024-07-15 11:36:33.584153] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:59.192 [2024-07-15 11:36:33.584177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
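The attach above was issued against the TLS listener without any --psk, so the connection is dropped during controller setup (errno 107, then a bad file descriptor) and initialization fails; the JSON-RPC request and its -5 Input/output error response are printed next. The harness expects this: the call is wrapped in the NOT helper from autotest_common.sh, which inverts the exit status so an expected failure counts as a pass. A simplified sketch of that wrapper, assuming the shape the xtrace suggests:

    NOT() {
        local es=0
        "$@" || es=$?                          # run the wrapped command, remember how it exited
        (( es > 128 )) && es=$((es & ~128))    # normalize "killed by signal" style exit codes
        (( es != 0 ))                          # succeed only if the wrapped command failed
    }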
00:20:59.192 request: 00:20:59.192 { 00:20:59.192 "name": "TLSTEST", 00:20:59.192 "trtype": "tcp", 00:20:59.192 "traddr": "10.0.0.2", 00:20:59.192 "adrfam": "ipv4", 00:20:59.192 "trsvcid": "4420", 00:20:59.192 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.192 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:59.192 "prchk_reftag": false, 00:20:59.192 "prchk_guard": false, 00:20:59.192 "hdgst": false, 00:20:59.192 "ddgst": false, 00:20:59.192 "method": "bdev_nvme_attach_controller", 00:20:59.192 "req_id": 1 00:20:59.192 } 00:20:59.192 Got JSON-RPC error response 00:20:59.192 response: 00:20:59.192 { 00:20:59.192 "code": -5, 00:20:59.192 "message": "Input/output error" 00:20:59.192 } 00:20:59.192 11:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2830308 00:20:59.192 11:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2830308 ']' 00:20:59.192 11:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2830308 00:20:59.192 11:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:59.192 11:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:59.192 11:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2830308 00:20:59.450 11:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:59.450 11:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:59.450 11:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2830308' 00:20:59.450 killing process with pid 2830308 00:20:59.450 11:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2830308 00:20:59.450 Received shutdown signal, test time was about 10.000000 seconds 00:20:59.450 00:20:59.450 Latency(us) 00:20:59.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.451 =================================================================================================================== 00:20:59.451 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:59.451 11:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2830308 00:20:59.710 11:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:59.710 11:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:59.710 11:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:59.710 11:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:59.710 11:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:59.710 11:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2824100 00:20:59.710 11:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2824100 ']' 00:20:59.710 11:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2824100 00:20:59.710 11:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:59.710 11:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:59.710 11:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2824100 00:20:59.710 11:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:59.710 11:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:59.710 11:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2824100' 00:20:59.710 
killing process with pid 2824100 00:20:59.710 11:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2824100 00:20:59.710 [2024-07-15 11:36:33.994799] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:59.710 11:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2824100 00:20:59.969 11:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:59.969 11:36:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:59.969 11:36:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:59.969 11:36:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:59.969 11:36:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:59.969 11:36:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:59.969 11:36:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:59.969 11:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:59.969 11:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:59.969 11:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.IRRGX256Jx 00:20:59.969 11:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:59.969 11:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.IRRGX256Jx 00:20:59.969 11:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:59.969 11:36:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:59.969 11:36:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:59.969 11:36:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.969 11:36:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2830800 00:20:59.969 11:36:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2830800 00:20:59.969 11:36:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:59.969 11:36:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2830800 ']' 00:20:59.969 11:36:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.969 11:36:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:59.969 11:36:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.969 11:36:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:59.969 11:36:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.969 [2024-07-15 11:36:34.384959] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
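The key_long generated just above is the TLS PSK in interchange format: the literal prefix NVMeTLSkey-1, the retained-key digest selector (2, i.e. SHA-384), a base64 blob, and a trailing colon. A rough sketch of how format_interchange_psk/format_key appear to assemble it, assuming the payload is base64(key bytes + 4-byte little-endian CRC32) as the helper in nvmf/common.sh suggests:

    key=00112233445566778899aabbccddeeff0011223344556677
    # the test passes the hex string itself as the key bytes; a CRC32 is appended for integrity checking
    b64=$(python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print(base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())' "$key")
    key_long="NVMeTLSkey-1:02:${b64}:"
    echo -n "$key_long" > /tmp/tls_key && chmod 0600 /tmp/tls_key   # key files must not be group/other readable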
00:20:59.969 [2024-07-15 11:36:34.385028] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.969 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.228 [2024-07-15 11:36:34.472844] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.228 [2024-07-15 11:36:34.569596] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.228 [2024-07-15 11:36:34.569647] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.228 [2024-07-15 11:36:34.569660] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.228 [2024-07-15 11:36:34.569672] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.228 [2024-07-15 11:36:34.569682] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:00.228 [2024-07-15 11:36:34.569715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.163 11:36:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:01.163 11:36:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:01.163 11:36:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:01.163 11:36:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:01.163 11:36:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.163 11:36:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.163 11:36:35 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.IRRGX256Jx 00:21:01.163 11:36:35 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IRRGX256Jx 00:21:01.163 11:36:35 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:01.163 [2024-07-15 11:36:35.601603] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.421 11:36:35 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:01.679 11:36:35 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:01.679 [2024-07-15 11:36:36.115000] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:01.679 [2024-07-15 11:36:36.115236] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.937 11:36:36 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:01.937 malloc0 00:21:02.196 11:36:36 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:02.455 11:36:36 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.IRRGX256Jx 00:21:02.455 [2024-07-15 11:36:36.891542] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:02.714 11:36:36 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IRRGX256Jx 00:21:02.714 11:36:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:02.714 11:36:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:02.714 11:36:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:02.714 11:36:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IRRGX256Jx' 00:21:02.714 11:36:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:02.714 11:36:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2831167 00:21:02.714 11:36:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:02.714 11:36:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:02.714 11:36:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2831167 /var/tmp/bdevperf.sock 00:21:02.714 11:36:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2831167 ']' 00:21:02.714 11:36:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:02.714 11:36:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:02.714 11:36:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:02.714 11:36:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:02.714 11:36:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.714 [2024-07-15 11:36:36.975361] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
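At this point the target side is fully configured; condensed, setup_nvmf_tgt issued the standard export sequence, where only the -k on the listener and the --psk on the allowed host differ from a plain NVMe/TCP setup (rpc.py path shortened):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS required on this listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IRRGX256Jx

The bdevperf instance started next attaches with the same key and runs TLSTESTn1 for ten seconds.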
00:21:02.714 [2024-07-15 11:36:36.975422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831167 ] 00:21:02.714 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.714 [2024-07-15 11:36:37.088080] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.973 [2024-07-15 11:36:37.233413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.542 11:36:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:03.542 11:36:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:03.542 11:36:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IRRGX256Jx 00:21:03.800 [2024-07-15 11:36:38.156790] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:03.800 [2024-07-15 11:36:38.156957] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:03.800 TLSTESTn1 00:21:04.059 11:36:38 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:04.059 Running I/O for 10 seconds... 00:21:14.038 00:21:14.039 Latency(us) 00:21:14.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.039 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:14.039 Verification LBA range: start 0x0 length 0x2000 00:21:14.039 TLSTESTn1 : 10.02 2797.38 10.93 0.00 0.00 45636.44 9949.56 53143.74 00:21:14.039 =================================================================================================================== 00:21:14.039 Total : 2797.38 10.93 0.00 0.00 45636.44 9949.56 53143.74 00:21:14.039 0 00:21:14.039 11:36:48 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:14.039 11:36:48 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2831167 00:21:14.039 11:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2831167 ']' 00:21:14.039 11:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2831167 00:21:14.039 11:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:14.039 11:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:14.039 11:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2831167 00:21:14.332 11:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:14.332 11:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:14.332 11:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2831167' 00:21:14.332 killing process with pid 2831167 00:21:14.332 11:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2831167 00:21:14.332 Received shutdown signal, test time was about 10.000000 seconds 00:21:14.332 00:21:14.332 Latency(us) 00:21:14.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:21:14.332 =================================================================================================================== 00:21:14.332 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:14.332 [2024-07-15 11:36:48.504449] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:14.332 11:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2831167 00:21:14.651 11:36:48 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.IRRGX256Jx 00:21:14.651 11:36:48 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IRRGX256Jx 00:21:14.651 11:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:14.651 11:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IRRGX256Jx 00:21:14.651 11:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:14.651 11:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.651 11:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:14.651 11:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.651 11:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IRRGX256Jx 00:21:14.651 11:36:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:14.651 11:36:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:14.651 11:36:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:14.651 11:36:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IRRGX256Jx' 00:21:14.651 11:36:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:14.651 11:36:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2833201 00:21:14.651 11:36:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:14.651 11:36:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:14.651 11:36:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2833201 /var/tmp/bdevperf.sock 00:21:14.652 11:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2833201 ']' 00:21:14.652 11:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:14.652 11:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:14.652 11:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:14.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:14.652 11:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:14.652 11:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.652 [2024-07-15 11:36:48.920224] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
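The chmod 0666 at target/tls.sh@170 above deliberately loosens the key file before this attach attempt: the initiator path (bdev_nvme_load_psk) and, later in the log, the target path both refuse a PSK file that is readable by group or other. The two outcomes can be reproduced directly (key path reused from the log for illustration):

    chmod 0600 /tmp/tmp.IRRGX256Jx && stat -c '%a' /tmp/tmp.IRRGX256Jx   # 600 -> key accepted
    chmod 0666 /tmp/tmp.IRRGX256Jx && stat -c '%a' /tmp/tmp.IRRGX256Jx   # 666 -> "Incorrect permissions for PSK file"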
00:21:14.652 [2024-07-15 11:36:48.920301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833201 ] 00:21:14.652 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.652 [2024-07-15 11:36:49.034362] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.910 [2024-07-15 11:36:49.182534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:15.477 11:36:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:15.477 11:36:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:15.477 11:36:49 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IRRGX256Jx 00:21:15.736 [2024-07-15 11:36:50.108537] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:15.736 [2024-07-15 11:36:50.108639] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:15.736 [2024-07-15 11:36:50.108660] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.IRRGX256Jx 00:21:15.736 request: 00:21:15.736 { 00:21:15.736 "name": "TLSTEST", 00:21:15.736 "trtype": "tcp", 00:21:15.736 "traddr": "10.0.0.2", 00:21:15.736 "adrfam": "ipv4", 00:21:15.736 "trsvcid": "4420", 00:21:15.736 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.736 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:15.736 "prchk_reftag": false, 00:21:15.736 "prchk_guard": false, 00:21:15.736 "hdgst": false, 00:21:15.736 "ddgst": false, 00:21:15.736 "psk": "/tmp/tmp.IRRGX256Jx", 00:21:15.736 "method": "bdev_nvme_attach_controller", 00:21:15.736 "req_id": 1 00:21:15.736 } 00:21:15.736 Got JSON-RPC error response 00:21:15.736 response: 00:21:15.736 { 00:21:15.736 "code": -1, 00:21:15.736 "message": "Operation not permitted" 00:21:15.736 } 00:21:15.736 11:36:50 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2833201 00:21:15.736 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2833201 ']' 00:21:15.736 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2833201 00:21:15.736 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:15.736 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:15.736 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2833201 00:21:15.736 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:15.736 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:15.736 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2833201' 00:21:15.736 killing process with pid 2833201 00:21:15.736 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2833201 00:21:15.736 Received shutdown signal, test time was about 10.000000 seconds 00:21:15.736 00:21:15.736 Latency(us) 00:21:15.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.736 
=================================================================================================================== 00:21:15.736 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:15.736 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2833201 00:21:16.304 11:36:50 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:16.304 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:16.304 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:16.304 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:16.304 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:16.304 11:36:50 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2830800 00:21:16.304 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2830800 ']' 00:21:16.304 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2830800 00:21:16.304 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:16.304 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:16.304 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2830800 00:21:16.304 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:16.304 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:16.304 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2830800' 00:21:16.304 killing process with pid 2830800 00:21:16.304 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2830800 00:21:16.304 [2024-07-15 11:36:50.517695] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:16.304 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2830800 00:21:16.563 11:36:50 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:16.563 11:36:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:16.563 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:16.563 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.563 11:36:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2833632 00:21:16.563 11:36:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:16.563 11:36:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2833632 00:21:16.563 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2833632 ']' 00:21:16.563 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.563 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:16.563 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
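Between stages the harness always tears down with the same killprocess/wait pair seen in the trace before a fresh nvmf_tgt is started. A simplified sketch of that helper, assuming the shape the xtrace shows (the caller follows it with wait to reap the child):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                      # confirm the process is still alive
        ps --no-headers -o comm= "$pid"     # logged in the trace as process_name=reactor_N
        echo "killing process with pid $pid"
        kill "$pid"                         # the caller then runs `wait $pid`
    }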
00:21:16.563 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:16.563 11:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.563 [2024-07-15 11:36:50.865311] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:21:16.563 [2024-07-15 11:36:50.865381] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.563 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.563 [2024-07-15 11:36:50.954300] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.821 [2024-07-15 11:36:51.053051] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.821 [2024-07-15 11:36:51.053105] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.821 [2024-07-15 11:36:51.053117] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.821 [2024-07-15 11:36:51.053129] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.821 [2024-07-15 11:36:51.053138] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:16.821 [2024-07-15 11:36:51.053165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.388 11:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:17.388 11:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:17.388 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:17.388 11:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:17.388 11:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.646 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.646 11:36:51 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.IRRGX256Jx 00:21:17.646 11:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:17.646 11:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.IRRGX256Jx 00:21:17.646 11:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:21:17.646 11:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:17.646 11:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:21:17.646 11:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:17.646 11:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.IRRGX256Jx 00:21:17.646 11:36:51 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IRRGX256Jx 00:21:17.646 11:36:51 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:17.646 [2024-07-15 11:36:52.089646] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.905 11:36:52 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:18.163 
11:36:52 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:18.163 [2024-07-15 11:36:52.611066] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:18.163 [2024-07-15 11:36:52.611312] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.421 11:36:52 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:18.421 malloc0 00:21:18.679 11:36:52 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:18.940 11:36:53 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IRRGX256Jx 00:21:18.940 [2024-07-15 11:36:53.399617] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:18.940 [2024-07-15 11:36:53.399654] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:18.940 [2024-07-15 11:36:53.399695] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:19.199 request: 00:21:19.199 { 00:21:19.199 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.199 "host": "nqn.2016-06.io.spdk:host1", 00:21:19.199 "psk": "/tmp/tmp.IRRGX256Jx", 00:21:19.199 "method": "nvmf_subsystem_add_host", 00:21:19.199 "req_id": 1 00:21:19.199 } 00:21:19.199 Got JSON-RPC error response 00:21:19.199 response: 00:21:19.199 { 00:21:19.199 "code": -32603, 00:21:19.199 "message": "Internal error" 00:21:19.199 } 00:21:19.199 11:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:19.199 11:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:19.199 11:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:19.199 11:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:19.200 11:36:53 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2833632 00:21:19.200 11:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2833632 ']' 00:21:19.200 11:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2833632 00:21:19.200 11:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:19.200 11:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:19.200 11:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2833632 00:21:19.200 11:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:19.200 11:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:19.200 11:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2833632' 00:21:19.200 killing process with pid 2833632 00:21:19.200 11:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2833632 00:21:19.200 11:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2833632 00:21:19.457 11:36:53 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.IRRGX256Jx 00:21:19.457 11:36:53 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:19.457 
11:36:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:19.457 11:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:19.457 11:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.457 11:36:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2834163 00:21:19.457 11:36:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2834163 00:21:19.457 11:36:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:19.457 11:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2834163 ']' 00:21:19.457 11:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.457 11:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:19.457 11:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.458 11:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:19.458 11:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.458 [2024-07-15 11:36:53.859731] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:21:19.458 [2024-07-15 11:36:53.859800] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.458 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.715 [2024-07-15 11:36:53.948487] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.715 [2024-07-15 11:36:54.052778] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.715 [2024-07-15 11:36:54.052825] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.715 [2024-07-15 11:36:54.052838] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.716 [2024-07-15 11:36:54.052849] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.716 [2024-07-15 11:36:54.052859] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
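The stage above is the target-side mirror of the earlier initiator check: with the key file still world-readable, nvmf_subsystem_add_host cannot retrieve the PSK and the RPC fails with -32603 Internal error, so the harness kills that target, restores 0600, and boots the fresh nvmf_tgt whose startup is shown here. Condensed, the failing and recovering calls were:

    chmod 0666 /tmp/tmp.IRRGX256Jx
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IRRGX256Jx
    # -> {"code": -32603, "message": "Internal error"} (Could not retrieve PSK from file)
    chmod 0600 /tmp/tmp.IRRGX256Jx      # restore strict permissions before the save_config test that follows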
00:21:19.716 [2024-07-15 11:36:54.052884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.651 11:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:20.651 11:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:20.651 11:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:20.651 11:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:20.651 11:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.651 11:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.651 11:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.IRRGX256Jx 00:21:20.651 11:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IRRGX256Jx 00:21:20.651 11:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:20.651 [2024-07-15 11:36:55.077430] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:20.651 11:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:20.910 11:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:21.170 [2024-07-15 11:36:55.598843] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:21.170 [2024-07-15 11:36:55.599081] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.170 11:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:21.429 malloc0 00:21:21.687 11:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:21.946 11:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IRRGX256Jx 00:21:21.946 [2024-07-15 11:36:56.379510] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:22.206 11:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:22.206 11:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2834583 00:21:22.206 11:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:22.206 11:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2834583 /var/tmp/bdevperf.sock 00:21:22.206 11:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2834583 ']' 00:21:22.206 11:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.206 11:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:22.206 11:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:22.206 11:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:22.206 11:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.206 [2024-07-15 11:36:56.454340] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:21:22.206 [2024-07-15 11:36:56.454400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834583 ] 00:21:22.206 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.206 [2024-07-15 11:36:56.567021] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.465 [2024-07-15 11:36:56.717404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.033 11:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.033 11:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:23.033 11:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IRRGX256Jx 00:21:23.292 [2024-07-15 11:36:57.649434] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:23.292 [2024-07-15 11:36:57.649591] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:23.292 TLSTESTn1 00:21:23.550 11:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:23.809 11:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:23.809 "subsystems": [ 00:21:23.809 { 00:21:23.809 "subsystem": "keyring", 00:21:23.809 "config": [] 00:21:23.809 }, 00:21:23.809 { 00:21:23.809 "subsystem": "iobuf", 00:21:23.809 "config": [ 00:21:23.809 { 00:21:23.809 "method": "iobuf_set_options", 00:21:23.809 "params": { 00:21:23.809 "small_pool_count": 8192, 00:21:23.809 "large_pool_count": 1024, 00:21:23.809 "small_bufsize": 8192, 00:21:23.809 "large_bufsize": 135168 00:21:23.809 } 00:21:23.809 } 00:21:23.809 ] 00:21:23.809 }, 00:21:23.809 { 00:21:23.809 "subsystem": "sock", 00:21:23.809 "config": [ 00:21:23.809 { 00:21:23.809 "method": "sock_set_default_impl", 00:21:23.809 "params": { 00:21:23.809 "impl_name": "posix" 00:21:23.809 } 00:21:23.809 }, 00:21:23.809 { 00:21:23.809 "method": "sock_impl_set_options", 00:21:23.809 "params": { 00:21:23.809 "impl_name": "ssl", 00:21:23.809 "recv_buf_size": 4096, 00:21:23.809 "send_buf_size": 4096, 00:21:23.809 "enable_recv_pipe": true, 00:21:23.809 "enable_quickack": false, 00:21:23.809 "enable_placement_id": 0, 00:21:23.809 "enable_zerocopy_send_server": true, 00:21:23.809 "enable_zerocopy_send_client": false, 00:21:23.809 "zerocopy_threshold": 0, 00:21:23.809 "tls_version": 0, 00:21:23.809 "enable_ktls": false 00:21:23.809 } 00:21:23.809 }, 00:21:23.809 { 00:21:23.809 "method": "sock_impl_set_options", 00:21:23.809 "params": { 00:21:23.809 "impl_name": "posix", 00:21:23.809 "recv_buf_size": 2097152, 00:21:23.809 
"send_buf_size": 2097152, 00:21:23.809 "enable_recv_pipe": true, 00:21:23.809 "enable_quickack": false, 00:21:23.809 "enable_placement_id": 0, 00:21:23.809 "enable_zerocopy_send_server": true, 00:21:23.809 "enable_zerocopy_send_client": false, 00:21:23.809 "zerocopy_threshold": 0, 00:21:23.809 "tls_version": 0, 00:21:23.809 "enable_ktls": false 00:21:23.809 } 00:21:23.809 } 00:21:23.809 ] 00:21:23.809 }, 00:21:23.809 { 00:21:23.809 "subsystem": "vmd", 00:21:23.809 "config": [] 00:21:23.809 }, 00:21:23.809 { 00:21:23.809 "subsystem": "accel", 00:21:23.809 "config": [ 00:21:23.809 { 00:21:23.809 "method": "accel_set_options", 00:21:23.809 "params": { 00:21:23.809 "small_cache_size": 128, 00:21:23.809 "large_cache_size": 16, 00:21:23.809 "task_count": 2048, 00:21:23.809 "sequence_count": 2048, 00:21:23.809 "buf_count": 2048 00:21:23.809 } 00:21:23.809 } 00:21:23.809 ] 00:21:23.809 }, 00:21:23.809 { 00:21:23.809 "subsystem": "bdev", 00:21:23.809 "config": [ 00:21:23.809 { 00:21:23.809 "method": "bdev_set_options", 00:21:23.809 "params": { 00:21:23.809 "bdev_io_pool_size": 65535, 00:21:23.809 "bdev_io_cache_size": 256, 00:21:23.809 "bdev_auto_examine": true, 00:21:23.809 "iobuf_small_cache_size": 128, 00:21:23.809 "iobuf_large_cache_size": 16 00:21:23.809 } 00:21:23.809 }, 00:21:23.809 { 00:21:23.809 "method": "bdev_raid_set_options", 00:21:23.809 "params": { 00:21:23.809 "process_window_size_kb": 1024 00:21:23.809 } 00:21:23.809 }, 00:21:23.809 { 00:21:23.809 "method": "bdev_iscsi_set_options", 00:21:23.809 "params": { 00:21:23.809 "timeout_sec": 30 00:21:23.809 } 00:21:23.809 }, 00:21:23.809 { 00:21:23.809 "method": "bdev_nvme_set_options", 00:21:23.809 "params": { 00:21:23.809 "action_on_timeout": "none", 00:21:23.809 "timeout_us": 0, 00:21:23.809 "timeout_admin_us": 0, 00:21:23.809 "keep_alive_timeout_ms": 10000, 00:21:23.809 "arbitration_burst": 0, 00:21:23.809 "low_priority_weight": 0, 00:21:23.809 "medium_priority_weight": 0, 00:21:23.809 "high_priority_weight": 0, 00:21:23.809 "nvme_adminq_poll_period_us": 10000, 00:21:23.809 "nvme_ioq_poll_period_us": 0, 00:21:23.809 "io_queue_requests": 0, 00:21:23.809 "delay_cmd_submit": true, 00:21:23.809 "transport_retry_count": 4, 00:21:23.809 "bdev_retry_count": 3, 00:21:23.809 "transport_ack_timeout": 0, 00:21:23.809 "ctrlr_loss_timeout_sec": 0, 00:21:23.809 "reconnect_delay_sec": 0, 00:21:23.809 "fast_io_fail_timeout_sec": 0, 00:21:23.809 "disable_auto_failback": false, 00:21:23.809 "generate_uuids": false, 00:21:23.809 "transport_tos": 0, 00:21:23.809 "nvme_error_stat": false, 00:21:23.809 "rdma_srq_size": 0, 00:21:23.809 "io_path_stat": false, 00:21:23.809 "allow_accel_sequence": false, 00:21:23.809 "rdma_max_cq_size": 0, 00:21:23.809 "rdma_cm_event_timeout_ms": 0, 00:21:23.809 "dhchap_digests": [ 00:21:23.809 "sha256", 00:21:23.809 "sha384", 00:21:23.809 "sha512" 00:21:23.809 ], 00:21:23.809 "dhchap_dhgroups": [ 00:21:23.809 "null", 00:21:23.809 "ffdhe2048", 00:21:23.809 "ffdhe3072", 00:21:23.809 "ffdhe4096", 00:21:23.809 "ffdhe6144", 00:21:23.809 "ffdhe8192" 00:21:23.809 ] 00:21:23.809 } 00:21:23.809 }, 00:21:23.809 { 00:21:23.809 "method": "bdev_nvme_set_hotplug", 00:21:23.809 "params": { 00:21:23.809 "period_us": 100000, 00:21:23.809 "enable": false 00:21:23.809 } 00:21:23.809 }, 00:21:23.809 { 00:21:23.809 "method": "bdev_malloc_create", 00:21:23.809 "params": { 00:21:23.809 "name": "malloc0", 00:21:23.809 "num_blocks": 8192, 00:21:23.809 "block_size": 4096, 00:21:23.809 "physical_block_size": 4096, 00:21:23.809 "uuid": 
"8605b19b-c045-4fa1-9859-757d4b244416", 00:21:23.809 "optimal_io_boundary": 0 00:21:23.809 } 00:21:23.809 }, 00:21:23.809 { 00:21:23.809 "method": "bdev_wait_for_examine" 00:21:23.809 } 00:21:23.809 ] 00:21:23.810 }, 00:21:23.810 { 00:21:23.810 "subsystem": "nbd", 00:21:23.810 "config": [] 00:21:23.810 }, 00:21:23.810 { 00:21:23.810 "subsystem": "scheduler", 00:21:23.810 "config": [ 00:21:23.810 { 00:21:23.810 "method": "framework_set_scheduler", 00:21:23.810 "params": { 00:21:23.810 "name": "static" 00:21:23.810 } 00:21:23.810 } 00:21:23.810 ] 00:21:23.810 }, 00:21:23.810 { 00:21:23.810 "subsystem": "nvmf", 00:21:23.810 "config": [ 00:21:23.810 { 00:21:23.810 "method": "nvmf_set_config", 00:21:23.810 "params": { 00:21:23.810 "discovery_filter": "match_any", 00:21:23.810 "admin_cmd_passthru": { 00:21:23.810 "identify_ctrlr": false 00:21:23.810 } 00:21:23.810 } 00:21:23.810 }, 00:21:23.810 { 00:21:23.810 "method": "nvmf_set_max_subsystems", 00:21:23.810 "params": { 00:21:23.810 "max_subsystems": 1024 00:21:23.810 } 00:21:23.810 }, 00:21:23.810 { 00:21:23.810 "method": "nvmf_set_crdt", 00:21:23.810 "params": { 00:21:23.810 "crdt1": 0, 00:21:23.810 "crdt2": 0, 00:21:23.810 "crdt3": 0 00:21:23.810 } 00:21:23.810 }, 00:21:23.810 { 00:21:23.810 "method": "nvmf_create_transport", 00:21:23.810 "params": { 00:21:23.810 "trtype": "TCP", 00:21:23.810 "max_queue_depth": 128, 00:21:23.810 "max_io_qpairs_per_ctrlr": 127, 00:21:23.810 "in_capsule_data_size": 4096, 00:21:23.810 "max_io_size": 131072, 00:21:23.810 "io_unit_size": 131072, 00:21:23.810 "max_aq_depth": 128, 00:21:23.810 "num_shared_buffers": 511, 00:21:23.810 "buf_cache_size": 4294967295, 00:21:23.810 "dif_insert_or_strip": false, 00:21:23.810 "zcopy": false, 00:21:23.810 "c2h_success": false, 00:21:23.810 "sock_priority": 0, 00:21:23.810 "abort_timeout_sec": 1, 00:21:23.810 "ack_timeout": 0, 00:21:23.810 "data_wr_pool_size": 0 00:21:23.810 } 00:21:23.810 }, 00:21:23.810 { 00:21:23.810 "method": "nvmf_create_subsystem", 00:21:23.810 "params": { 00:21:23.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.810 "allow_any_host": false, 00:21:23.810 "serial_number": "SPDK00000000000001", 00:21:23.810 "model_number": "SPDK bdev Controller", 00:21:23.810 "max_namespaces": 10, 00:21:23.810 "min_cntlid": 1, 00:21:23.810 "max_cntlid": 65519, 00:21:23.810 "ana_reporting": false 00:21:23.810 } 00:21:23.810 }, 00:21:23.810 { 00:21:23.810 "method": "nvmf_subsystem_add_host", 00:21:23.810 "params": { 00:21:23.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.810 "host": "nqn.2016-06.io.spdk:host1", 00:21:23.810 "psk": "/tmp/tmp.IRRGX256Jx" 00:21:23.810 } 00:21:23.810 }, 00:21:23.810 { 00:21:23.810 "method": "nvmf_subsystem_add_ns", 00:21:23.810 "params": { 00:21:23.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.810 "namespace": { 00:21:23.810 "nsid": 1, 00:21:23.810 "bdev_name": "malloc0", 00:21:23.810 "nguid": "8605B19BC0454FA19859757D4B244416", 00:21:23.810 "uuid": "8605b19b-c045-4fa1-9859-757d4b244416", 00:21:23.810 "no_auto_visible": false 00:21:23.810 } 00:21:23.810 } 00:21:23.810 }, 00:21:23.810 { 00:21:23.810 "method": "nvmf_subsystem_add_listener", 00:21:23.810 "params": { 00:21:23.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.810 "listen_address": { 00:21:23.810 "trtype": "TCP", 00:21:23.810 "adrfam": "IPv4", 00:21:23.810 "traddr": "10.0.0.2", 00:21:23.810 "trsvcid": "4420" 00:21:23.810 }, 00:21:23.810 "secure_channel": true 00:21:23.810 } 00:21:23.810 } 00:21:23.810 ] 00:21:23.810 } 00:21:23.810 ] 00:21:23.810 }' 00:21:23.810 11:36:58 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:24.071 11:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:24.071 "subsystems": [ 00:21:24.071 { 00:21:24.071 "subsystem": "keyring", 00:21:24.071 "config": [] 00:21:24.071 }, 00:21:24.071 { 00:21:24.071 "subsystem": "iobuf", 00:21:24.071 "config": [ 00:21:24.071 { 00:21:24.071 "method": "iobuf_set_options", 00:21:24.071 "params": { 00:21:24.071 "small_pool_count": 8192, 00:21:24.071 "large_pool_count": 1024, 00:21:24.071 "small_bufsize": 8192, 00:21:24.071 "large_bufsize": 135168 00:21:24.071 } 00:21:24.071 } 00:21:24.071 ] 00:21:24.071 }, 00:21:24.071 { 00:21:24.071 "subsystem": "sock", 00:21:24.071 "config": [ 00:21:24.071 { 00:21:24.071 "method": "sock_set_default_impl", 00:21:24.071 "params": { 00:21:24.071 "impl_name": "posix" 00:21:24.071 } 00:21:24.071 }, 00:21:24.071 { 00:21:24.071 "method": "sock_impl_set_options", 00:21:24.071 "params": { 00:21:24.071 "impl_name": "ssl", 00:21:24.071 "recv_buf_size": 4096, 00:21:24.071 "send_buf_size": 4096, 00:21:24.071 "enable_recv_pipe": true, 00:21:24.071 "enable_quickack": false, 00:21:24.071 "enable_placement_id": 0, 00:21:24.071 "enable_zerocopy_send_server": true, 00:21:24.071 "enable_zerocopy_send_client": false, 00:21:24.071 "zerocopy_threshold": 0, 00:21:24.071 "tls_version": 0, 00:21:24.071 "enable_ktls": false 00:21:24.071 } 00:21:24.071 }, 00:21:24.071 { 00:21:24.071 "method": "sock_impl_set_options", 00:21:24.071 "params": { 00:21:24.071 "impl_name": "posix", 00:21:24.071 "recv_buf_size": 2097152, 00:21:24.071 "send_buf_size": 2097152, 00:21:24.071 "enable_recv_pipe": true, 00:21:24.071 "enable_quickack": false, 00:21:24.071 "enable_placement_id": 0, 00:21:24.071 "enable_zerocopy_send_server": true, 00:21:24.071 "enable_zerocopy_send_client": false, 00:21:24.071 "zerocopy_threshold": 0, 00:21:24.071 "tls_version": 0, 00:21:24.071 "enable_ktls": false 00:21:24.071 } 00:21:24.071 } 00:21:24.071 ] 00:21:24.071 }, 00:21:24.071 { 00:21:24.071 "subsystem": "vmd", 00:21:24.071 "config": [] 00:21:24.071 }, 00:21:24.071 { 00:21:24.071 "subsystem": "accel", 00:21:24.071 "config": [ 00:21:24.071 { 00:21:24.071 "method": "accel_set_options", 00:21:24.071 "params": { 00:21:24.071 "small_cache_size": 128, 00:21:24.071 "large_cache_size": 16, 00:21:24.071 "task_count": 2048, 00:21:24.071 "sequence_count": 2048, 00:21:24.071 "buf_count": 2048 00:21:24.071 } 00:21:24.071 } 00:21:24.071 ] 00:21:24.071 }, 00:21:24.071 { 00:21:24.071 "subsystem": "bdev", 00:21:24.071 "config": [ 00:21:24.071 { 00:21:24.071 "method": "bdev_set_options", 00:21:24.071 "params": { 00:21:24.071 "bdev_io_pool_size": 65535, 00:21:24.071 "bdev_io_cache_size": 256, 00:21:24.071 "bdev_auto_examine": true, 00:21:24.071 "iobuf_small_cache_size": 128, 00:21:24.071 "iobuf_large_cache_size": 16 00:21:24.071 } 00:21:24.071 }, 00:21:24.071 { 00:21:24.071 "method": "bdev_raid_set_options", 00:21:24.071 "params": { 00:21:24.071 "process_window_size_kb": 1024 00:21:24.071 } 00:21:24.071 }, 00:21:24.071 { 00:21:24.071 "method": "bdev_iscsi_set_options", 00:21:24.071 "params": { 00:21:24.071 "timeout_sec": 30 00:21:24.071 } 00:21:24.071 }, 00:21:24.071 { 00:21:24.071 "method": "bdev_nvme_set_options", 00:21:24.071 "params": { 00:21:24.071 "action_on_timeout": "none", 00:21:24.071 "timeout_us": 0, 00:21:24.071 "timeout_admin_us": 0, 00:21:24.071 "keep_alive_timeout_ms": 10000, 00:21:24.071 "arbitration_burst": 0, 
00:21:24.071 "low_priority_weight": 0, 00:21:24.071 "medium_priority_weight": 0, 00:21:24.071 "high_priority_weight": 0, 00:21:24.071 "nvme_adminq_poll_period_us": 10000, 00:21:24.071 "nvme_ioq_poll_period_us": 0, 00:21:24.071 "io_queue_requests": 512, 00:21:24.071 "delay_cmd_submit": true, 00:21:24.071 "transport_retry_count": 4, 00:21:24.071 "bdev_retry_count": 3, 00:21:24.071 "transport_ack_timeout": 0, 00:21:24.071 "ctrlr_loss_timeout_sec": 0, 00:21:24.071 "reconnect_delay_sec": 0, 00:21:24.071 "fast_io_fail_timeout_sec": 0, 00:21:24.071 "disable_auto_failback": false, 00:21:24.071 "generate_uuids": false, 00:21:24.071 "transport_tos": 0, 00:21:24.071 "nvme_error_stat": false, 00:21:24.071 "rdma_srq_size": 0, 00:21:24.071 "io_path_stat": false, 00:21:24.071 "allow_accel_sequence": false, 00:21:24.071 "rdma_max_cq_size": 0, 00:21:24.071 "rdma_cm_event_timeout_ms": 0, 00:21:24.071 "dhchap_digests": [ 00:21:24.071 "sha256", 00:21:24.071 "sha384", 00:21:24.071 "sha512" 00:21:24.071 ], 00:21:24.071 "dhchap_dhgroups": [ 00:21:24.071 "null", 00:21:24.071 "ffdhe2048", 00:21:24.071 "ffdhe3072", 00:21:24.071 "ffdhe4096", 00:21:24.071 "ffdhe6144", 00:21:24.071 "ffdhe8192" 00:21:24.071 ] 00:21:24.071 } 00:21:24.071 }, 00:21:24.071 { 00:21:24.071 "method": "bdev_nvme_attach_controller", 00:21:24.071 "params": { 00:21:24.071 "name": "TLSTEST", 00:21:24.071 "trtype": "TCP", 00:21:24.071 "adrfam": "IPv4", 00:21:24.071 "traddr": "10.0.0.2", 00:21:24.071 "trsvcid": "4420", 00:21:24.071 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.071 "prchk_reftag": false, 00:21:24.071 "prchk_guard": false, 00:21:24.071 "ctrlr_loss_timeout_sec": 0, 00:21:24.071 "reconnect_delay_sec": 0, 00:21:24.071 "fast_io_fail_timeout_sec": 0, 00:21:24.071 "psk": "/tmp/tmp.IRRGX256Jx", 00:21:24.072 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:24.072 "hdgst": false, 00:21:24.072 "ddgst": false 00:21:24.072 } 00:21:24.072 }, 00:21:24.072 { 00:21:24.072 "method": "bdev_nvme_set_hotplug", 00:21:24.072 "params": { 00:21:24.072 "period_us": 100000, 00:21:24.072 "enable": false 00:21:24.072 } 00:21:24.072 }, 00:21:24.072 { 00:21:24.072 "method": "bdev_wait_for_examine" 00:21:24.072 } 00:21:24.072 ] 00:21:24.072 }, 00:21:24.072 { 00:21:24.072 "subsystem": "nbd", 00:21:24.072 "config": [] 00:21:24.072 } 00:21:24.072 ] 00:21:24.072 }' 00:21:24.072 11:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2834583 00:21:24.072 11:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2834583 ']' 00:21:24.072 11:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2834583 00:21:24.072 11:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:24.072 11:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:24.072 11:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2834583 00:21:24.072 11:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:24.072 11:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:24.072 11:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2834583' 00:21:24.072 killing process with pid 2834583 00:21:24.072 11:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2834583 00:21:24.072 Received shutdown signal, test time was about 10.000000 seconds 00:21:24.072 00:21:24.072 Latency(us) 00:21:24.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:21:24.072 =================================================================================================================== 00:21:24.072 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:24.072 [2024-07-15 11:36:58.463970] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:24.072 11:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2834583 00:21:24.641 11:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2834163 00:21:24.641 11:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2834163 ']' 00:21:24.641 11:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2834163 00:21:24.641 11:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:24.641 11:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:24.641 11:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2834163 00:21:24.641 11:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:24.641 11:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:24.641 11:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2834163' 00:21:24.641 killing process with pid 2834163 00:21:24.641 11:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2834163 00:21:24.641 [2024-07-15 11:36:58.846806] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:24.641 11:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2834163 00:21:24.641 11:36:59 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:24.641 11:36:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:24.641 11:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:24.641 11:36:59 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:24.641 "subsystems": [ 00:21:24.641 { 00:21:24.641 "subsystem": "keyring", 00:21:24.641 "config": [] 00:21:24.641 }, 00:21:24.641 { 00:21:24.641 "subsystem": "iobuf", 00:21:24.641 "config": [ 00:21:24.641 { 00:21:24.641 "method": "iobuf_set_options", 00:21:24.641 "params": { 00:21:24.641 "small_pool_count": 8192, 00:21:24.641 "large_pool_count": 1024, 00:21:24.641 "small_bufsize": 8192, 00:21:24.641 "large_bufsize": 135168 00:21:24.641 } 00:21:24.641 } 00:21:24.641 ] 00:21:24.641 }, 00:21:24.641 { 00:21:24.641 "subsystem": "sock", 00:21:24.641 "config": [ 00:21:24.641 { 00:21:24.641 "method": "sock_set_default_impl", 00:21:24.641 "params": { 00:21:24.641 "impl_name": "posix" 00:21:24.641 } 00:21:24.641 }, 00:21:24.641 { 00:21:24.641 "method": "sock_impl_set_options", 00:21:24.641 "params": { 00:21:24.641 "impl_name": "ssl", 00:21:24.641 "recv_buf_size": 4096, 00:21:24.641 "send_buf_size": 4096, 00:21:24.641 "enable_recv_pipe": true, 00:21:24.641 "enable_quickack": false, 00:21:24.641 "enable_placement_id": 0, 00:21:24.641 "enable_zerocopy_send_server": true, 00:21:24.641 "enable_zerocopy_send_client": false, 00:21:24.641 "zerocopy_threshold": 0, 00:21:24.641 "tls_version": 0, 00:21:24.641 "enable_ktls": false 00:21:24.641 } 00:21:24.641 }, 00:21:24.641 { 00:21:24.641 "method": "sock_impl_set_options", 00:21:24.641 "params": { 00:21:24.641 "impl_name": "posix", 00:21:24.641 
"recv_buf_size": 2097152, 00:21:24.641 "send_buf_size": 2097152, 00:21:24.641 "enable_recv_pipe": true, 00:21:24.641 "enable_quickack": false, 00:21:24.641 "enable_placement_id": 0, 00:21:24.641 "enable_zerocopy_send_server": true, 00:21:24.641 "enable_zerocopy_send_client": false, 00:21:24.641 "zerocopy_threshold": 0, 00:21:24.641 "tls_version": 0, 00:21:24.641 "enable_ktls": false 00:21:24.641 } 00:21:24.641 } 00:21:24.641 ] 00:21:24.641 }, 00:21:24.641 { 00:21:24.641 "subsystem": "vmd", 00:21:24.641 "config": [] 00:21:24.641 }, 00:21:24.641 { 00:21:24.641 "subsystem": "accel", 00:21:24.641 "config": [ 00:21:24.641 { 00:21:24.641 "method": "accel_set_options", 00:21:24.641 "params": { 00:21:24.641 "small_cache_size": 128, 00:21:24.641 "large_cache_size": 16, 00:21:24.641 "task_count": 2048, 00:21:24.641 "sequence_count": 2048, 00:21:24.641 "buf_count": 2048 00:21:24.641 } 00:21:24.641 } 00:21:24.641 ] 00:21:24.641 }, 00:21:24.642 { 00:21:24.642 "subsystem": "bdev", 00:21:24.642 "config": [ 00:21:24.642 { 00:21:24.642 "method": "bdev_set_options", 00:21:24.642 "params": { 00:21:24.642 "bdev_io_pool_size": 65535, 00:21:24.642 "bdev_io_cache_size": 256, 00:21:24.642 "bdev_auto_examine": true, 00:21:24.642 "iobuf_small_cache_size": 128, 00:21:24.642 "iobuf_large_cache_size": 16 00:21:24.642 } 00:21:24.642 }, 00:21:24.642 { 00:21:24.642 "method": "bdev_raid_set_options", 00:21:24.642 "params": { 00:21:24.642 "process_window_size_kb": 1024 00:21:24.642 } 00:21:24.642 }, 00:21:24.642 { 00:21:24.642 "method": "bdev_iscsi_set_options", 00:21:24.642 "params": { 00:21:24.642 "timeout_sec": 30 00:21:24.642 } 00:21:24.642 }, 00:21:24.642 { 00:21:24.642 "method": "bdev_nvme_set_options", 00:21:24.642 "params": { 00:21:24.642 "action_on_timeout": "none", 00:21:24.642 "timeout_us": 0, 00:21:24.642 "timeout_admin_us": 0, 00:21:24.642 "keep_alive_timeout_ms": 10000, 00:21:24.642 "arbitration_burst": 0, 00:21:24.642 "low_priority_weight": 0, 00:21:24.642 "medium_priority_weight": 0, 00:21:24.642 "high_priority_weight": 0, 00:21:24.642 "nvme_adminq_poll_period_us": 10000, 00:21:24.642 "nvme_ioq_poll_period_us": 0, 00:21:24.642 "io_queue_requests": 0, 00:21:24.642 "delay_cmd_submit": true, 00:21:24.642 "transport_retry_count": 4, 00:21:24.642 "bdev_retry_count": 3, 00:21:24.642 "transport_ack_timeout": 0, 00:21:24.642 "ctrlr_loss_timeout_sec": 0, 00:21:24.642 "reconnect_delay_sec": 0, 00:21:24.642 "fast_io_fail_timeout_sec": 0, 00:21:24.642 "disable_auto_failback": false, 00:21:24.642 "generate_uuids": false, 00:21:24.642 "transport_tos": 0, 00:21:24.642 "nvme_error_stat": false, 00:21:24.642 "rdma_srq_size": 0, 00:21:24.642 "io_path_stat": false, 00:21:24.642 "allow_accel_sequence": false, 00:21:24.642 "rdma_max_cq_size": 0, 00:21:24.642 "rdma_cm_event_timeout_ms": 0, 00:21:24.642 "dhchap_digests": [ 00:21:24.642 "sha256", 00:21:24.642 "sha384", 00:21:24.642 "sha512" 00:21:24.642 ], 00:21:24.642 "dhchap_dhgroups": [ 00:21:24.642 "null", 00:21:24.642 "ffdhe2048", 00:21:24.642 "ffdhe3072", 00:21:24.642 "ffdhe4096", 00:21:24.642 "ffdhe6144", 00:21:24.642 "ffdhe8192" 00:21:24.642 ] 00:21:24.642 } 00:21:24.642 }, 00:21:24.642 { 00:21:24.642 "method": "bdev_nvme_set_hotplug", 00:21:24.642 "params": { 00:21:24.642 "period_us": 100000, 00:21:24.642 "enable": false 00:21:24.642 } 00:21:24.642 }, 00:21:24.642 { 00:21:24.642 "method": "bdev_malloc_create", 00:21:24.642 "params": { 00:21:24.642 "name": "malloc0", 00:21:24.642 "num_blocks": 8192, 00:21:24.642 "block_size": 4096, 00:21:24.642 "physical_block_size": 4096, 
00:21:24.642 "uuid": "8605b19b-c045-4fa1-9859-757d4b244416", 00:21:24.642 "optimal_io_boundary": 0 00:21:24.642 } 00:21:24.642 }, 00:21:24.642 { 00:21:24.642 "method": "bdev_wait_for_examine" 00:21:24.642 } 00:21:24.642 ] 00:21:24.642 }, 00:21:24.642 { 00:21:24.642 "subsystem": "nbd", 00:21:24.642 "config": [] 00:21:24.642 }, 00:21:24.642 { 00:21:24.642 "subsystem": "scheduler", 00:21:24.642 "config": [ 00:21:24.642 { 00:21:24.642 "method": "framework_set_scheduler", 00:21:24.642 "params": { 00:21:24.642 "name": "static" 00:21:24.642 } 00:21:24.642 } 00:21:24.642 ] 00:21:24.642 }, 00:21:24.642 { 00:21:24.642 "subsystem": "nvmf", 00:21:24.642 "config": [ 00:21:24.642 { 00:21:24.642 "method": "nvmf_set_config", 00:21:24.642 "params": { 00:21:24.642 "discovery_filter": "match_any", 00:21:24.642 "admin_cmd_passthru": { 00:21:24.642 "identify_ctrlr": false 00:21:24.642 } 00:21:24.642 } 00:21:24.642 }, 00:21:24.642 { 00:21:24.642 "method": "nvmf_set_max_subsystems", 00:21:24.642 "params": { 00:21:24.642 "max_subsystems": 1024 00:21:24.642 } 00:21:24.642 }, 00:21:24.642 { 00:21:24.642 "method": "nvmf_set_crdt", 00:21:24.642 "params": { 00:21:24.642 "crdt1": 0, 00:21:24.642 "crdt2": 0, 00:21:24.642 "crdt3": 0 00:21:24.642 } 00:21:24.642 }, 00:21:24.642 { 00:21:24.642 "method": "nvmf_create_transport", 00:21:24.642 "params": { 00:21:24.642 "trtype": "TCP", 00:21:24.642 "max_queue_depth": 128, 00:21:24.642 "max_io_qpairs_per_ctrlr": 127, 00:21:24.642 "in_capsule_data_size": 4096, 00:21:24.642 "max_io_size": 131072, 00:21:24.642 "io_unit_size": 131072, 00:21:24.642 "max_aq_depth": 128, 00:21:24.642 "num_shared_buffers": 511, 00:21:24.642 "buf_cache_size": 4294967295, 00:21:24.642 "dif_insert_or_strip": false, 00:21:24.642 "zcopy": false, 00:21:24.642 "c2h_success": false, 00:21:24.642 "sock_priority": 0, 00:21:24.642 "abort_timeout_sec": 1, 00:21:24.642 "ack_timeout": 0, 00:21:24.642 "data_wr_pool_size": 0 00:21:24.642 } 00:21:24.642 }, 00:21:24.642 { 00:21:24.642 "method": "nvmf_create_subsystem", 00:21:24.642 "params": { 00:21:24.642 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.642 "allow_any_host": false, 00:21:24.642 "serial_number": "SPDK00000000000001", 00:21:24.642 "model_number": "SPDK bdev Controller", 00:21:24.642 "max_namespaces": 10, 00:21:24.642 "min_cntlid": 1, 00:21:24.642 "max_cntlid": 65519, 00:21:24.642 "ana_reporting": false 00:21:24.642 } 00:21:24.642 }, 00:21:24.642 { 00:21:24.642 "method": "nvmf_subsystem_add_host", 00:21:24.642 "params": { 00:21:24.642 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.642 "host": "nqn.2016-06.io.spdk:host1", 00:21:24.642 "psk": "/tmp/tmp.IRRGX256Jx" 00:21:24.642 } 00:21:24.642 }, 00:21:24.642 { 00:21:24.642 "method": "nvmf_subsystem_add_ns", 00:21:24.642 "params": { 00:21:24.642 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.642 "namespace": { 00:21:24.642 "nsid": 1, 00:21:24.642 "bdev_name": "malloc0", 00:21:24.642 "nguid": "8605B19BC0454FA19859757D4B244416", 00:21:24.642 "uuid": "8605b19b-c045-4fa1-9859-757d4b244416", 00:21:24.642 "no_auto_visible": false 00:21:24.642 } 00:21:24.642 } 00:21:24.642 }, 00:21:24.642 { 00:21:24.642 "method": "nvmf_subsystem_add_listener", 00:21:24.642 "params": { 00:21:24.642 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.642 "listen_address": { 00:21:24.642 "trtype": "TCP", 00:21:24.642 "adrfam": "IPv4", 00:21:24.642 "traddr": "10.0.0.2", 00:21:24.642 "trsvcid": "4420" 00:21:24.642 }, 00:21:24.642 "secure_channel": true 00:21:24.642 } 00:21:24.642 } 00:21:24.642 ] 00:21:24.642 } 00:21:24.642 ] 00:21:24.642 }' 
00:21:24.642 11:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.642 11:36:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2835124 00:21:24.642 11:36:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:24.642 11:36:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2835124 00:21:24.642 11:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2835124 ']' 00:21:24.642 11:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.642 11:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:24.642 11:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.642 11:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:24.642 11:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.902 [2024-07-15 11:36:59.143425] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:21:24.902 [2024-07-15 11:36:59.143482] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.902 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.902 [2024-07-15 11:36:59.227845] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.902 [2024-07-15 11:36:59.330009] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.902 [2024-07-15 11:36:59.330056] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.902 [2024-07-15 11:36:59.330069] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.902 [2024-07-15 11:36:59.330080] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:24.902 [2024-07-15 11:36:59.330089] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
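The app_setup_trace notices above repeat for every SPDK application this script launches with "-e 0xFFFF"; acting on them looks roughly like this (the output redirect and copy destination are illustrative, the commands themselves are the ones the notices name):

    # Snapshot the nvmf tracepoints of instance id 0 while the target runs,
    # or grab the shared-memory trace file afterwards for offline analysis.
    build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0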
00:21:24.902 [2024-07-15 11:36:59.330163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.160 [2024-07-15 11:36:59.549102] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.160 [2024-07-15 11:36:59.565007] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:25.160 [2024-07-15 11:36:59.581075] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:25.160 [2024-07-15 11:36:59.592566] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.728 11:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:25.728 11:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:25.728 11:37:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:25.728 11:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:25.728 11:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.728 11:37:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.728 11:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2835398 00:21:25.728 11:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2835398 /var/tmp/bdevperf.sock 00:21:25.728 11:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2835398 ']' 00:21:25.728 11:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:25.728 11:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:25.728 11:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:25.728 11:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:25.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
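With the target now listening on 10.0.0.2 port 4420 (TLS still flagged experimental), tls.sh@204-@208 start bdevperf in the same fd-fed style: the process idles with -z, exposes its own RPC socket, and the JSON echoed below (the $bdevperfconf captured at @197) becomes its config. A rough sketch of that initiator-side flow, with the wait-for-socket step elided and the here-string redirection again illustrative:

    # Start bdevperf idle (-z) against its own RPC socket, feeding the saved
    # bdevperf config through fd 63, then drive the 10 s verify workload.
    # Paths are relative to the SPDK tree; flags are the ones shown in the log.
    BPERF_SOCK=/var/tmp/bdevperf.sock
    build/examples/bdevperf -m 0x4 -z -r "$BPERF_SOCK" \
        -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 63<<< "$bdevperfconf" &
    # ...wait until $BPERF_SOCK accepts RPCs, then:
    examples/bdev/bdevperf/bdevperf.py -t 20 -s "$BPERF_SOCK" perform_tests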
00:21:25.728 11:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:25.728 "subsystems": [ 00:21:25.728 { 00:21:25.728 "subsystem": "keyring", 00:21:25.728 "config": [] 00:21:25.728 }, 00:21:25.728 { 00:21:25.728 "subsystem": "iobuf", 00:21:25.728 "config": [ 00:21:25.728 { 00:21:25.728 "method": "iobuf_set_options", 00:21:25.728 "params": { 00:21:25.728 "small_pool_count": 8192, 00:21:25.728 "large_pool_count": 1024, 00:21:25.728 "small_bufsize": 8192, 00:21:25.728 "large_bufsize": 135168 00:21:25.728 } 00:21:25.728 } 00:21:25.728 ] 00:21:25.728 }, 00:21:25.728 { 00:21:25.728 "subsystem": "sock", 00:21:25.728 "config": [ 00:21:25.728 { 00:21:25.728 "method": "sock_set_default_impl", 00:21:25.728 "params": { 00:21:25.728 "impl_name": "posix" 00:21:25.728 } 00:21:25.728 }, 00:21:25.728 { 00:21:25.728 "method": "sock_impl_set_options", 00:21:25.728 "params": { 00:21:25.728 "impl_name": "ssl", 00:21:25.728 "recv_buf_size": 4096, 00:21:25.728 "send_buf_size": 4096, 00:21:25.728 "enable_recv_pipe": true, 00:21:25.728 "enable_quickack": false, 00:21:25.728 "enable_placement_id": 0, 00:21:25.728 "enable_zerocopy_send_server": true, 00:21:25.728 "enable_zerocopy_send_client": false, 00:21:25.728 "zerocopy_threshold": 0, 00:21:25.728 "tls_version": 0, 00:21:25.728 "enable_ktls": false 00:21:25.728 } 00:21:25.728 }, 00:21:25.728 { 00:21:25.728 "method": "sock_impl_set_options", 00:21:25.728 "params": { 00:21:25.728 "impl_name": "posix", 00:21:25.728 "recv_buf_size": 2097152, 00:21:25.728 "send_buf_size": 2097152, 00:21:25.728 "enable_recv_pipe": true, 00:21:25.728 "enable_quickack": false, 00:21:25.728 "enable_placement_id": 0, 00:21:25.728 "enable_zerocopy_send_server": true, 00:21:25.728 "enable_zerocopy_send_client": false, 00:21:25.728 "zerocopy_threshold": 0, 00:21:25.728 "tls_version": 0, 00:21:25.728 "enable_ktls": false 00:21:25.728 } 00:21:25.728 } 00:21:25.728 ] 00:21:25.728 }, 00:21:25.728 { 00:21:25.728 "subsystem": "vmd", 00:21:25.728 "config": [] 00:21:25.728 }, 00:21:25.728 { 00:21:25.728 "subsystem": "accel", 00:21:25.728 "config": [ 00:21:25.729 { 00:21:25.729 "method": "accel_set_options", 00:21:25.729 "params": { 00:21:25.729 "small_cache_size": 128, 00:21:25.729 "large_cache_size": 16, 00:21:25.729 "task_count": 2048, 00:21:25.729 "sequence_count": 2048, 00:21:25.729 "buf_count": 2048 00:21:25.729 } 00:21:25.729 } 00:21:25.729 ] 00:21:25.729 }, 00:21:25.729 { 00:21:25.729 "subsystem": "bdev", 00:21:25.729 "config": [ 00:21:25.729 { 00:21:25.729 "method": "bdev_set_options", 00:21:25.729 "params": { 00:21:25.729 "bdev_io_pool_size": 65535, 00:21:25.729 "bdev_io_cache_size": 256, 00:21:25.729 "bdev_auto_examine": true, 00:21:25.729 "iobuf_small_cache_size": 128, 00:21:25.729 "iobuf_large_cache_size": 16 00:21:25.729 } 00:21:25.729 }, 00:21:25.729 { 00:21:25.729 "method": "bdev_raid_set_options", 00:21:25.729 "params": { 00:21:25.729 "process_window_size_kb": 1024 00:21:25.729 } 00:21:25.729 }, 00:21:25.729 { 00:21:25.729 "method": "bdev_iscsi_set_options", 00:21:25.729 "params": { 00:21:25.729 "timeout_sec": 30 00:21:25.729 } 00:21:25.729 }, 00:21:25.729 { 00:21:25.729 "method": "bdev_nvme_set_options", 00:21:25.729 "params": { 00:21:25.729 "action_on_timeout": "none", 00:21:25.729 "timeout_us": 0, 00:21:25.729 "timeout_admin_us": 0, 00:21:25.729 "keep_alive_timeout_ms": 10000, 00:21:25.729 "arbitration_burst": 0, 00:21:25.729 "low_priority_weight": 0, 00:21:25.729 "medium_priority_weight": 0, 00:21:25.729 "high_priority_weight": 0, 00:21:25.729 
"nvme_adminq_poll_period_us": 10000, 00:21:25.729 "nvme_ioq_poll_period_us": 0, 00:21:25.729 "io_queue_requests": 512, 00:21:25.729 "delay_cmd_submit": true, 00:21:25.729 "transport_retry_count": 4, 00:21:25.729 "bdev_retry_count": 3, 00:21:25.729 "transport_ack_timeout": 0, 00:21:25.729 "ctrlr_loss_timeout_sec": 0, 00:21:25.729 "reconnect_delay_sec": 0, 00:21:25.729 "fast_io_fail_timeout_sec": 0, 00:21:25.729 "disable_auto_failback": false, 00:21:25.729 "generate_uuids": false, 00:21:25.729 "transport_tos": 0, 00:21:25.729 "nvme_error_stat": false, 00:21:25.729 "rdma_srq_size": 0, 00:21:25.729 "io_path_stat": false, 00:21:25.729 "allow_accel_sequence": false, 00:21:25.729 "rdma_max_cq_size": 0, 00:21:25.729 "rdma_cm_event_timeout_ms": 0, 00:21:25.729 "dhchap_digests": [ 00:21:25.729 "sha256", 00:21:25.729 "sha384", 00:21:25.729 "sha512" 00:21:25.729 ], 00:21:25.729 "dhchap_dhgroups": [ 00:21:25.729 "null", 00:21:25.729 "ffdhe2048", 00:21:25.729 "ffdhe3072", 00:21:25.729 "ffdhe4096", 00:21:25.729 "ffdhe6144", 00:21:25.729 "ffdhe8192" 00:21:25.729 ] 00:21:25.729 } 00:21:25.729 }, 00:21:25.729 { 00:21:25.729 "method": "bdev_nvme_attach_controller", 00:21:25.729 "params": { 00:21:25.729 "name": "TLSTEST", 00:21:25.729 "trtype": "TCP", 00:21:25.729 "adrfam": "IPv4", 00:21:25.729 "traddr": "10.0.0.2", 00:21:25.729 "trsvcid": "4420", 00:21:25.729 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.729 "prchk_reftag": false, 00:21:25.729 "prchk_guard": false, 00:21:25.729 "ctrlr_loss_timeout_sec": 0, 00:21:25.729 "reconnect_delay_sec": 0, 00:21:25.729 "fast_io_fail_timeout_sec": 0, 00:21:25.729 "psk": "/tmp/tmp.IRRGX256Jx", 00:21:25.729 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:25.729 "hdgst": false, 00:21:25.729 "ddgst": false 00:21:25.729 } 00:21:25.729 }, 00:21:25.729 { 00:21:25.729 "method": "bdev_nvme_set_hotplug", 00:21:25.729 "params": { 00:21:25.729 "period_us": 100000, 00:21:25.729 "enable": false 00:21:25.729 } 00:21:25.729 }, 00:21:25.729 { 00:21:25.729 "method": "bdev_wait_for_examine" 00:21:25.729 } 00:21:25.729 ] 00:21:25.729 }, 00:21:25.729 { 00:21:25.729 "subsystem": "nbd", 00:21:25.729 "config": [] 00:21:25.729 } 00:21:25.729 ] 00:21:25.729 }' 00:21:25.729 11:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:25.729 11:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.729 [2024-07-15 11:37:00.173743] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:21:25.729 [2024-07-15 11:37:00.173810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835398 ] 00:21:25.987 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.987 [2024-07-15 11:37:00.287602] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.987 [2024-07-15 11:37:00.432718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:26.247 [2024-07-15 11:37:00.641942] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:26.247 [2024-07-15 11:37:00.642104] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:26.816 11:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:26.816 11:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:26.816 11:37:01 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:26.816 Running I/O for 10 seconds... 00:21:39.023 00:21:39.023 Latency(us) 00:21:39.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.023 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:39.023 Verification LBA range: start 0x0 length 0x2000 00:21:39.023 TLSTESTn1 : 10.02 2805.05 10.96 0.00 0.00 45512.35 9413.35 52428.80 00:21:39.023 =================================================================================================================== 00:21:39.023 Total : 2805.05 10.96 0.00 0.00 45512.35 9413.35 52428.80 00:21:39.023 0 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2835398 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2835398 ']' 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2835398 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2835398 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2835398' 00:21:39.023 killing process with pid 2835398 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2835398 00:21:39.023 Received shutdown signal, test time was about 10.000000 seconds 00:21:39.023 00:21:39.023 Latency(us) 00:21:39.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.023 =================================================================================================================== 00:21:39.023 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:39.023 [2024-07-15 11:37:11.346425] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2835398 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2835124 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2835124 ']' 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2835124 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2835124 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2835124' 00:21:39.023 killing process with pid 2835124 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2835124 00:21:39.023 [2024-07-15 11:37:11.714549] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2835124 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2837294 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2837294 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2837294 ']' 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:39.023 11:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.023 [2024-07-15 11:37:12.011980] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:21:39.023 [2024-07-15 11:37:12.012044] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:39.023 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.023 [2024-07-15 11:37:12.100142] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.023 [2024-07-15 11:37:12.186584] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:39.023 [2024-07-15 11:37:12.186626] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:39.023 [2024-07-15 11:37:12.186636] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:39.023 [2024-07-15 11:37:12.186649] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:39.023 [2024-07-15 11:37:12.186656] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:39.023 [2024-07-15 11:37:12.186678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.023 11:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:39.023 11:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:39.023 11:37:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:39.023 11:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:39.023 11:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.023 11:37:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.023 11:37:12 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.IRRGX256Jx 00:21:39.023 11:37:12 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IRRGX256Jx 00:21:39.023 11:37:12 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:39.023 [2024-07-15 11:37:13.216787] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.023 11:37:13 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:39.281 11:37:13 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:39.281 [2024-07-15 11:37:13.714089] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:39.281 [2024-07-15 11:37:13.714306] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.281 11:37:13 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:39.540 malloc0 00:21:39.540 11:37:13 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:39.798 11:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.IRRGX256Jx 00:21:40.058 [2024-07-15 11:37:14.393321] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:40.058 11:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:40.058 11:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2837797 00:21:40.058 11:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:40.058 11:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2837797 /var/tmp/bdevperf.sock 00:21:40.058 11:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2837797 ']' 00:21:40.058 11:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:40.058 11:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:40.058 11:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:40.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:40.058 11:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:40.058 11:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.058 [2024-07-15 11:37:14.465491] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:21:40.058 [2024-07-15 11:37:14.465549] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2837797 ] 00:21:40.058 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.316 [2024-07-15 11:37:14.546738] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.316 [2024-07-15 11:37:14.649448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.289 11:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:41.289 11:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:41.289 11:37:15 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IRRGX256Jx 00:21:41.289 11:37:15 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:41.548 [2024-07-15 11:37:15.910086] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:41.548 nvme0n1 00:21:41.807 11:37:16 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:41.807 Running I/O for 1 seconds... 
00:21:42.742 00:21:42.742 Latency(us) 00:21:42.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.742 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:42.742 Verification LBA range: start 0x0 length 0x2000 00:21:42.742 nvme0n1 : 1.02 3564.81 13.93 0.00 0.00 35550.51 7626.01 44087.85 00:21:42.742 =================================================================================================================== 00:21:42.742 Total : 3564.81 13.93 0.00 0.00 35550.51 7626.01 44087.85 00:21:42.742 0 00:21:42.742 11:37:17 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2837797 00:21:42.742 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2837797 ']' 00:21:42.742 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2837797 00:21:42.742 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:42.742 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:42.742 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2837797 00:21:43.000 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:43.000 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:43.000 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2837797' 00:21:43.000 killing process with pid 2837797 00:21:43.000 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2837797 00:21:43.000 Received shutdown signal, test time was about 1.000000 seconds 00:21:43.000 00:21:43.000 Latency(us) 00:21:43.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.000 =================================================================================================================== 00:21:43.000 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:43.000 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2837797 00:21:43.000 11:37:17 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2837294 00:21:43.000 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2837294 ']' 00:21:43.000 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2837294 00:21:43.000 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:43.000 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:43.000 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2837294 00:21:43.258 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:43.258 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:43.259 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2837294' 00:21:43.259 killing process with pid 2837294 00:21:43.259 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2837294 00:21:43.259 [2024-07-15 11:37:17.493739] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:43.259 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2837294 00:21:43.259 11:37:17 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:21:43.259 11:37:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:43.259 
11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:43.259 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.259 11:37:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2838335 00:21:43.259 11:37:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:43.259 11:37:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2838335 00:21:43.259 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2838335 ']' 00:21:43.259 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.259 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:43.259 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.259 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:43.259 11:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.517 [2024-07-15 11:37:17.768689] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:21:43.517 [2024-07-15 11:37:17.768762] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.517 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.517 [2024-07-15 11:37:17.864072] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.517 [2024-07-15 11:37:17.948874] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:43.517 [2024-07-15 11:37:17.948918] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.517 [2024-07-15 11:37:17.948928] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:43.517 [2024-07-15 11:37:17.948938] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:43.517 [2024-07-15 11:37:17.948945] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
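The target started above (pid 2838335) is set up through individual rpc_cmd calls rather than a pre-built JSON config, and both it and the bdevperf client that follows (pid 2838609) reference the PSK through a named keyring key rather than a raw file path. The client-side half of that sequence, as it appears further below, reduces to roughly this (socket path, key file and NQNs taken from the log):

    # Register the PSK file as key "key0" on the bdevperf RPC socket, then
    # attach the TLS-protected controller by key name (the non-deprecated form).
    RPC=scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    "$RPC" -s "$SOCK" keyring_file_add_key key0 /tmp/tmp.IRRGX256Jx
    "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The save_config dumps captured afterwards reflect this: the key appears under the "keyring" subsystem and the attached controller carries "psk": "key0" instead of the /tmp path used by the earlier runs.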
00:21:43.517 [2024-07-15 11:37:17.948972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.452 11:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:44.452 11:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:44.452 11:37:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:44.452 11:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:44.452 11:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.452 11:37:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.452 11:37:18 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:21:44.452 11:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.452 11:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.452 [2024-07-15 11:37:18.750746] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.452 malloc0 00:21:44.452 [2024-07-15 11:37:18.779990] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:44.452 [2024-07-15 11:37:18.780193] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:44.452 11:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.452 11:37:18 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=2838609 00:21:44.452 11:37:18 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 2838609 /var/tmp/bdevperf.sock 00:21:44.452 11:37:18 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:44.452 11:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2838609 ']' 00:21:44.452 11:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:44.452 11:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:44.452 11:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:44.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:44.452 11:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:44.452 11:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.452 [2024-07-15 11:37:18.858065] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:21:44.452 [2024-07-15 11:37:18.858117] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2838609 ] 00:21:44.452 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.711 [2024-07-15 11:37:18.938126] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.711 [2024-07-15 11:37:19.038004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.647 11:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:45.647 11:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:45.647 11:37:19 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IRRGX256Jx 00:21:45.647 11:37:20 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:45.906 [2024-07-15 11:37:20.266102] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:45.906 nvme0n1 00:21:45.906 11:37:20 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:46.164 Running I/O for 1 seconds... 00:21:47.101 00:21:47.101 Latency(us) 00:21:47.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.101 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:47.101 Verification LBA range: start 0x0 length 0x2000 00:21:47.101 nvme0n1 : 1.02 3586.26 14.01 0.00 0.00 35308.43 8877.15 58624.93 00:21:47.101 =================================================================================================================== 00:21:47.101 Total : 3586.26 14.01 0.00 0.00 35308.43 8877.15 58624.93 00:21:47.101 0 00:21:47.101 11:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:21:47.101 11:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.101 11:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:47.360 11:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.360 11:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:21:47.360 "subsystems": [ 00:21:47.360 { 00:21:47.360 "subsystem": "keyring", 00:21:47.360 "config": [ 00:21:47.360 { 00:21:47.360 "method": "keyring_file_add_key", 00:21:47.360 "params": { 00:21:47.360 "name": "key0", 00:21:47.360 "path": "/tmp/tmp.IRRGX256Jx" 00:21:47.360 } 00:21:47.360 } 00:21:47.360 ] 00:21:47.360 }, 00:21:47.360 { 00:21:47.360 "subsystem": "iobuf", 00:21:47.360 "config": [ 00:21:47.360 { 00:21:47.360 "method": "iobuf_set_options", 00:21:47.360 "params": { 00:21:47.360 "small_pool_count": 8192, 00:21:47.360 "large_pool_count": 1024, 00:21:47.360 "small_bufsize": 8192, 00:21:47.360 "large_bufsize": 135168 00:21:47.360 } 00:21:47.360 } 00:21:47.360 ] 00:21:47.360 }, 00:21:47.360 { 00:21:47.360 "subsystem": "sock", 00:21:47.360 "config": [ 00:21:47.360 { 00:21:47.360 "method": "sock_set_default_impl", 00:21:47.360 "params": { 00:21:47.360 "impl_name": "posix" 00:21:47.360 } 
00:21:47.360 }, 00:21:47.360 { 00:21:47.360 "method": "sock_impl_set_options", 00:21:47.360 "params": { 00:21:47.360 "impl_name": "ssl", 00:21:47.360 "recv_buf_size": 4096, 00:21:47.360 "send_buf_size": 4096, 00:21:47.360 "enable_recv_pipe": true, 00:21:47.360 "enable_quickack": false, 00:21:47.360 "enable_placement_id": 0, 00:21:47.360 "enable_zerocopy_send_server": true, 00:21:47.360 "enable_zerocopy_send_client": false, 00:21:47.360 "zerocopy_threshold": 0, 00:21:47.360 "tls_version": 0, 00:21:47.360 "enable_ktls": false 00:21:47.360 } 00:21:47.360 }, 00:21:47.360 { 00:21:47.360 "method": "sock_impl_set_options", 00:21:47.360 "params": { 00:21:47.360 "impl_name": "posix", 00:21:47.360 "recv_buf_size": 2097152, 00:21:47.360 "send_buf_size": 2097152, 00:21:47.360 "enable_recv_pipe": true, 00:21:47.360 "enable_quickack": false, 00:21:47.360 "enable_placement_id": 0, 00:21:47.360 "enable_zerocopy_send_server": true, 00:21:47.360 "enable_zerocopy_send_client": false, 00:21:47.360 "zerocopy_threshold": 0, 00:21:47.360 "tls_version": 0, 00:21:47.361 "enable_ktls": false 00:21:47.361 } 00:21:47.361 } 00:21:47.361 ] 00:21:47.361 }, 00:21:47.361 { 00:21:47.361 "subsystem": "vmd", 00:21:47.361 "config": [] 00:21:47.361 }, 00:21:47.361 { 00:21:47.361 "subsystem": "accel", 00:21:47.361 "config": [ 00:21:47.361 { 00:21:47.361 "method": "accel_set_options", 00:21:47.361 "params": { 00:21:47.361 "small_cache_size": 128, 00:21:47.361 "large_cache_size": 16, 00:21:47.361 "task_count": 2048, 00:21:47.361 "sequence_count": 2048, 00:21:47.361 "buf_count": 2048 00:21:47.361 } 00:21:47.361 } 00:21:47.361 ] 00:21:47.361 }, 00:21:47.361 { 00:21:47.361 "subsystem": "bdev", 00:21:47.361 "config": [ 00:21:47.361 { 00:21:47.361 "method": "bdev_set_options", 00:21:47.361 "params": { 00:21:47.361 "bdev_io_pool_size": 65535, 00:21:47.361 "bdev_io_cache_size": 256, 00:21:47.361 "bdev_auto_examine": true, 00:21:47.361 "iobuf_small_cache_size": 128, 00:21:47.361 "iobuf_large_cache_size": 16 00:21:47.361 } 00:21:47.361 }, 00:21:47.361 { 00:21:47.361 "method": "bdev_raid_set_options", 00:21:47.361 "params": { 00:21:47.361 "process_window_size_kb": 1024 00:21:47.361 } 00:21:47.361 }, 00:21:47.361 { 00:21:47.361 "method": "bdev_iscsi_set_options", 00:21:47.361 "params": { 00:21:47.361 "timeout_sec": 30 00:21:47.361 } 00:21:47.361 }, 00:21:47.361 { 00:21:47.361 "method": "bdev_nvme_set_options", 00:21:47.361 "params": { 00:21:47.361 "action_on_timeout": "none", 00:21:47.361 "timeout_us": 0, 00:21:47.361 "timeout_admin_us": 0, 00:21:47.361 "keep_alive_timeout_ms": 10000, 00:21:47.361 "arbitration_burst": 0, 00:21:47.361 "low_priority_weight": 0, 00:21:47.361 "medium_priority_weight": 0, 00:21:47.361 "high_priority_weight": 0, 00:21:47.361 "nvme_adminq_poll_period_us": 10000, 00:21:47.361 "nvme_ioq_poll_period_us": 0, 00:21:47.361 "io_queue_requests": 0, 00:21:47.361 "delay_cmd_submit": true, 00:21:47.361 "transport_retry_count": 4, 00:21:47.361 "bdev_retry_count": 3, 00:21:47.361 "transport_ack_timeout": 0, 00:21:47.361 "ctrlr_loss_timeout_sec": 0, 00:21:47.361 "reconnect_delay_sec": 0, 00:21:47.361 "fast_io_fail_timeout_sec": 0, 00:21:47.361 "disable_auto_failback": false, 00:21:47.361 "generate_uuids": false, 00:21:47.361 "transport_tos": 0, 00:21:47.361 "nvme_error_stat": false, 00:21:47.361 "rdma_srq_size": 0, 00:21:47.361 "io_path_stat": false, 00:21:47.361 "allow_accel_sequence": false, 00:21:47.361 "rdma_max_cq_size": 0, 00:21:47.361 "rdma_cm_event_timeout_ms": 0, 00:21:47.361 "dhchap_digests": [ 00:21:47.361 "sha256", 
00:21:47.361 "sha384", 00:21:47.361 "sha512" 00:21:47.361 ], 00:21:47.361 "dhchap_dhgroups": [ 00:21:47.361 "null", 00:21:47.361 "ffdhe2048", 00:21:47.361 "ffdhe3072", 00:21:47.361 "ffdhe4096", 00:21:47.361 "ffdhe6144", 00:21:47.361 "ffdhe8192" 00:21:47.361 ] 00:21:47.361 } 00:21:47.361 }, 00:21:47.361 { 00:21:47.361 "method": "bdev_nvme_set_hotplug", 00:21:47.361 "params": { 00:21:47.361 "period_us": 100000, 00:21:47.361 "enable": false 00:21:47.361 } 00:21:47.361 }, 00:21:47.361 { 00:21:47.361 "method": "bdev_malloc_create", 00:21:47.361 "params": { 00:21:47.361 "name": "malloc0", 00:21:47.361 "num_blocks": 8192, 00:21:47.361 "block_size": 4096, 00:21:47.361 "physical_block_size": 4096, 00:21:47.361 "uuid": "9294731d-9d7b-43c8-afea-aa684768dac6", 00:21:47.361 "optimal_io_boundary": 0 00:21:47.361 } 00:21:47.361 }, 00:21:47.361 { 00:21:47.361 "method": "bdev_wait_for_examine" 00:21:47.361 } 00:21:47.361 ] 00:21:47.361 }, 00:21:47.361 { 00:21:47.361 "subsystem": "nbd", 00:21:47.361 "config": [] 00:21:47.361 }, 00:21:47.361 { 00:21:47.361 "subsystem": "scheduler", 00:21:47.361 "config": [ 00:21:47.361 { 00:21:47.361 "method": "framework_set_scheduler", 00:21:47.361 "params": { 00:21:47.361 "name": "static" 00:21:47.361 } 00:21:47.361 } 00:21:47.361 ] 00:21:47.361 }, 00:21:47.361 { 00:21:47.361 "subsystem": "nvmf", 00:21:47.361 "config": [ 00:21:47.361 { 00:21:47.361 "method": "nvmf_set_config", 00:21:47.361 "params": { 00:21:47.361 "discovery_filter": "match_any", 00:21:47.361 "admin_cmd_passthru": { 00:21:47.361 "identify_ctrlr": false 00:21:47.361 } 00:21:47.361 } 00:21:47.361 }, 00:21:47.361 { 00:21:47.361 "method": "nvmf_set_max_subsystems", 00:21:47.361 "params": { 00:21:47.361 "max_subsystems": 1024 00:21:47.361 } 00:21:47.361 }, 00:21:47.361 { 00:21:47.361 "method": "nvmf_set_crdt", 00:21:47.361 "params": { 00:21:47.361 "crdt1": 0, 00:21:47.361 "crdt2": 0, 00:21:47.361 "crdt3": 0 00:21:47.361 } 00:21:47.361 }, 00:21:47.361 { 00:21:47.361 "method": "nvmf_create_transport", 00:21:47.361 "params": { 00:21:47.361 "trtype": "TCP", 00:21:47.361 "max_queue_depth": 128, 00:21:47.361 "max_io_qpairs_per_ctrlr": 127, 00:21:47.361 "in_capsule_data_size": 4096, 00:21:47.361 "max_io_size": 131072, 00:21:47.361 "io_unit_size": 131072, 00:21:47.361 "max_aq_depth": 128, 00:21:47.361 "num_shared_buffers": 511, 00:21:47.361 "buf_cache_size": 4294967295, 00:21:47.361 "dif_insert_or_strip": false, 00:21:47.361 "zcopy": false, 00:21:47.361 "c2h_success": false, 00:21:47.361 "sock_priority": 0, 00:21:47.361 "abort_timeout_sec": 1, 00:21:47.361 "ack_timeout": 0, 00:21:47.361 "data_wr_pool_size": 0 00:21:47.361 } 00:21:47.361 }, 00:21:47.361 { 00:21:47.361 "method": "nvmf_create_subsystem", 00:21:47.361 "params": { 00:21:47.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.361 "allow_any_host": false, 00:21:47.361 "serial_number": "00000000000000000000", 00:21:47.361 "model_number": "SPDK bdev Controller", 00:21:47.361 "max_namespaces": 32, 00:21:47.361 "min_cntlid": 1, 00:21:47.361 "max_cntlid": 65519, 00:21:47.361 "ana_reporting": false 00:21:47.361 } 00:21:47.361 }, 00:21:47.361 { 00:21:47.361 "method": "nvmf_subsystem_add_host", 00:21:47.361 "params": { 00:21:47.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.361 "host": "nqn.2016-06.io.spdk:host1", 00:21:47.361 "psk": "key0" 00:21:47.361 } 00:21:47.361 }, 00:21:47.361 { 00:21:47.361 "method": "nvmf_subsystem_add_ns", 00:21:47.361 "params": { 00:21:47.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.361 "namespace": { 00:21:47.361 "nsid": 1, 
00:21:47.361 "bdev_name": "malloc0", 00:21:47.361 "nguid": "9294731D9D7B43C8AFEAAA684768DAC6", 00:21:47.361 "uuid": "9294731d-9d7b-43c8-afea-aa684768dac6", 00:21:47.361 "no_auto_visible": false 00:21:47.361 } 00:21:47.361 } 00:21:47.361 }, 00:21:47.361 { 00:21:47.361 "method": "nvmf_subsystem_add_listener", 00:21:47.361 "params": { 00:21:47.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.361 "listen_address": { 00:21:47.361 "trtype": "TCP", 00:21:47.361 "adrfam": "IPv4", 00:21:47.361 "traddr": "10.0.0.2", 00:21:47.361 "trsvcid": "4420" 00:21:47.361 }, 00:21:47.361 "secure_channel": true 00:21:47.361 } 00:21:47.361 } 00:21:47.361 ] 00:21:47.361 } 00:21:47.361 ] 00:21:47.361 }' 00:21:47.361 11:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:47.620 11:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:21:47.620 "subsystems": [ 00:21:47.620 { 00:21:47.620 "subsystem": "keyring", 00:21:47.620 "config": [ 00:21:47.620 { 00:21:47.620 "method": "keyring_file_add_key", 00:21:47.620 "params": { 00:21:47.620 "name": "key0", 00:21:47.620 "path": "/tmp/tmp.IRRGX256Jx" 00:21:47.620 } 00:21:47.620 } 00:21:47.620 ] 00:21:47.620 }, 00:21:47.620 { 00:21:47.620 "subsystem": "iobuf", 00:21:47.620 "config": [ 00:21:47.620 { 00:21:47.620 "method": "iobuf_set_options", 00:21:47.620 "params": { 00:21:47.620 "small_pool_count": 8192, 00:21:47.620 "large_pool_count": 1024, 00:21:47.620 "small_bufsize": 8192, 00:21:47.620 "large_bufsize": 135168 00:21:47.620 } 00:21:47.620 } 00:21:47.620 ] 00:21:47.620 }, 00:21:47.620 { 00:21:47.620 "subsystem": "sock", 00:21:47.620 "config": [ 00:21:47.620 { 00:21:47.620 "method": "sock_set_default_impl", 00:21:47.620 "params": { 00:21:47.620 "impl_name": "posix" 00:21:47.620 } 00:21:47.620 }, 00:21:47.620 { 00:21:47.620 "method": "sock_impl_set_options", 00:21:47.620 "params": { 00:21:47.620 "impl_name": "ssl", 00:21:47.620 "recv_buf_size": 4096, 00:21:47.620 "send_buf_size": 4096, 00:21:47.620 "enable_recv_pipe": true, 00:21:47.620 "enable_quickack": false, 00:21:47.620 "enable_placement_id": 0, 00:21:47.620 "enable_zerocopy_send_server": true, 00:21:47.620 "enable_zerocopy_send_client": false, 00:21:47.620 "zerocopy_threshold": 0, 00:21:47.620 "tls_version": 0, 00:21:47.620 "enable_ktls": false 00:21:47.620 } 00:21:47.620 }, 00:21:47.620 { 00:21:47.620 "method": "sock_impl_set_options", 00:21:47.620 "params": { 00:21:47.620 "impl_name": "posix", 00:21:47.620 "recv_buf_size": 2097152, 00:21:47.620 "send_buf_size": 2097152, 00:21:47.620 "enable_recv_pipe": true, 00:21:47.620 "enable_quickack": false, 00:21:47.620 "enable_placement_id": 0, 00:21:47.620 "enable_zerocopy_send_server": true, 00:21:47.620 "enable_zerocopy_send_client": false, 00:21:47.620 "zerocopy_threshold": 0, 00:21:47.620 "tls_version": 0, 00:21:47.620 "enable_ktls": false 00:21:47.620 } 00:21:47.620 } 00:21:47.620 ] 00:21:47.620 }, 00:21:47.620 { 00:21:47.620 "subsystem": "vmd", 00:21:47.620 "config": [] 00:21:47.620 }, 00:21:47.620 { 00:21:47.620 "subsystem": "accel", 00:21:47.620 "config": [ 00:21:47.620 { 00:21:47.620 "method": "accel_set_options", 00:21:47.620 "params": { 00:21:47.620 "small_cache_size": 128, 00:21:47.620 "large_cache_size": 16, 00:21:47.620 "task_count": 2048, 00:21:47.620 "sequence_count": 2048, 00:21:47.620 "buf_count": 2048 00:21:47.620 } 00:21:47.620 } 00:21:47.620 ] 00:21:47.620 }, 00:21:47.620 { 00:21:47.620 "subsystem": "bdev", 00:21:47.620 "config": [ 
00:21:47.620 { 00:21:47.620 "method": "bdev_set_options", 00:21:47.620 "params": { 00:21:47.620 "bdev_io_pool_size": 65535, 00:21:47.620 "bdev_io_cache_size": 256, 00:21:47.620 "bdev_auto_examine": true, 00:21:47.620 "iobuf_small_cache_size": 128, 00:21:47.620 "iobuf_large_cache_size": 16 00:21:47.620 } 00:21:47.620 }, 00:21:47.620 { 00:21:47.620 "method": "bdev_raid_set_options", 00:21:47.620 "params": { 00:21:47.620 "process_window_size_kb": 1024 00:21:47.620 } 00:21:47.620 }, 00:21:47.620 { 00:21:47.620 "method": "bdev_iscsi_set_options", 00:21:47.620 "params": { 00:21:47.620 "timeout_sec": 30 00:21:47.620 } 00:21:47.620 }, 00:21:47.620 { 00:21:47.620 "method": "bdev_nvme_set_options", 00:21:47.620 "params": { 00:21:47.620 "action_on_timeout": "none", 00:21:47.620 "timeout_us": 0, 00:21:47.620 "timeout_admin_us": 0, 00:21:47.620 "keep_alive_timeout_ms": 10000, 00:21:47.620 "arbitration_burst": 0, 00:21:47.620 "low_priority_weight": 0, 00:21:47.620 "medium_priority_weight": 0, 00:21:47.620 "high_priority_weight": 0, 00:21:47.620 "nvme_adminq_poll_period_us": 10000, 00:21:47.620 "nvme_ioq_poll_period_us": 0, 00:21:47.620 "io_queue_requests": 512, 00:21:47.620 "delay_cmd_submit": true, 00:21:47.620 "transport_retry_count": 4, 00:21:47.620 "bdev_retry_count": 3, 00:21:47.620 "transport_ack_timeout": 0, 00:21:47.620 "ctrlr_loss_timeout_sec": 0, 00:21:47.620 "reconnect_delay_sec": 0, 00:21:47.620 "fast_io_fail_timeout_sec": 0, 00:21:47.620 "disable_auto_failback": false, 00:21:47.620 "generate_uuids": false, 00:21:47.620 "transport_tos": 0, 00:21:47.620 "nvme_error_stat": false, 00:21:47.620 "rdma_srq_size": 0, 00:21:47.620 "io_path_stat": false, 00:21:47.620 "allow_accel_sequence": false, 00:21:47.620 "rdma_max_cq_size": 0, 00:21:47.620 "rdma_cm_event_timeout_ms": 0, 00:21:47.620 "dhchap_digests": [ 00:21:47.620 "sha256", 00:21:47.620 "sha384", 00:21:47.620 "sha512" 00:21:47.620 ], 00:21:47.620 "dhchap_dhgroups": [ 00:21:47.620 "null", 00:21:47.620 "ffdhe2048", 00:21:47.620 "ffdhe3072", 00:21:47.620 "ffdhe4096", 00:21:47.620 "ffdhe6144", 00:21:47.620 "ffdhe8192" 00:21:47.620 ] 00:21:47.620 } 00:21:47.620 }, 00:21:47.620 { 00:21:47.620 "method": "bdev_nvme_attach_controller", 00:21:47.620 "params": { 00:21:47.620 "name": "nvme0", 00:21:47.620 "trtype": "TCP", 00:21:47.620 "adrfam": "IPv4", 00:21:47.620 "traddr": "10.0.0.2", 00:21:47.620 "trsvcid": "4420", 00:21:47.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.620 "prchk_reftag": false, 00:21:47.620 "prchk_guard": false, 00:21:47.620 "ctrlr_loss_timeout_sec": 0, 00:21:47.620 "reconnect_delay_sec": 0, 00:21:47.620 "fast_io_fail_timeout_sec": 0, 00:21:47.620 "psk": "key0", 00:21:47.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:47.620 "hdgst": false, 00:21:47.620 "ddgst": false 00:21:47.620 } 00:21:47.620 }, 00:21:47.620 { 00:21:47.620 "method": "bdev_nvme_set_hotplug", 00:21:47.620 "params": { 00:21:47.620 "period_us": 100000, 00:21:47.620 "enable": false 00:21:47.620 } 00:21:47.620 }, 00:21:47.620 { 00:21:47.620 "method": "bdev_enable_histogram", 00:21:47.620 "params": { 00:21:47.620 "name": "nvme0n1", 00:21:47.620 "enable": true 00:21:47.620 } 00:21:47.620 }, 00:21:47.620 { 00:21:47.620 "method": "bdev_wait_for_examine" 00:21:47.620 } 00:21:47.620 ] 00:21:47.620 }, 00:21:47.620 { 00:21:47.620 "subsystem": "nbd", 00:21:47.620 "config": [] 00:21:47.620 } 00:21:47.620 ] 00:21:47.620 }' 00:21:47.620 11:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 2838609 00:21:47.620 11:37:21 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 2838609 ']' 00:21:47.620 11:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2838609 00:21:47.620 11:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:47.620 11:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:47.620 11:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2838609 00:21:47.620 11:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:47.620 11:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:47.620 11:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2838609' 00:21:47.620 killing process with pid 2838609 00:21:47.620 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2838609 00:21:47.620 Received shutdown signal, test time was about 1.000000 seconds 00:21:47.620 00:21:47.620 Latency(us) 00:21:47.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.620 =================================================================================================================== 00:21:47.620 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:47.620 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2838609 00:21:47.878 11:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 2838335 00:21:47.878 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2838335 ']' 00:21:47.878 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2838335 00:21:47.878 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:47.878 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:47.878 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2838335 00:21:47.878 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:47.878 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:47.878 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2838335' 00:21:47.878 killing process with pid 2838335 00:21:47.878 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2838335 00:21:47.878 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2838335 00:21:48.138 11:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:21:48.138 11:37:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:48.138 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:48.138 11:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:21:48.138 "subsystems": [ 00:21:48.138 { 00:21:48.138 "subsystem": "keyring", 00:21:48.138 "config": [ 00:21:48.138 { 00:21:48.138 "method": "keyring_file_add_key", 00:21:48.138 "params": { 00:21:48.138 "name": "key0", 00:21:48.138 "path": "/tmp/tmp.IRRGX256Jx" 00:21:48.138 } 00:21:48.138 } 00:21:48.138 ] 00:21:48.138 }, 00:21:48.138 { 00:21:48.138 "subsystem": "iobuf", 00:21:48.138 "config": [ 00:21:48.138 { 00:21:48.138 "method": "iobuf_set_options", 00:21:48.138 "params": { 00:21:48.138 "small_pool_count": 8192, 00:21:48.138 "large_pool_count": 1024, 00:21:48.138 "small_bufsize": 8192, 00:21:48.138 "large_bufsize": 135168 00:21:48.138 } 00:21:48.138 } 00:21:48.138 ] 00:21:48.138 }, 
00:21:48.138 { 00:21:48.138 "subsystem": "sock", 00:21:48.138 "config": [ 00:21:48.138 { 00:21:48.138 "method": "sock_set_default_impl", 00:21:48.138 "params": { 00:21:48.138 "impl_name": "posix" 00:21:48.138 } 00:21:48.138 }, 00:21:48.138 { 00:21:48.138 "method": "sock_impl_set_options", 00:21:48.138 "params": { 00:21:48.138 "impl_name": "ssl", 00:21:48.138 "recv_buf_size": 4096, 00:21:48.138 "send_buf_size": 4096, 00:21:48.138 "enable_recv_pipe": true, 00:21:48.138 "enable_quickack": false, 00:21:48.138 "enable_placement_id": 0, 00:21:48.138 "enable_zerocopy_send_server": true, 00:21:48.138 "enable_zerocopy_send_client": false, 00:21:48.138 "zerocopy_threshold": 0, 00:21:48.138 "tls_version": 0, 00:21:48.138 "enable_ktls": false 00:21:48.138 } 00:21:48.138 }, 00:21:48.138 { 00:21:48.138 "method": "sock_impl_set_options", 00:21:48.138 "params": { 00:21:48.138 "impl_name": "posix", 00:21:48.138 "recv_buf_size": 2097152, 00:21:48.138 "send_buf_size": 2097152, 00:21:48.138 "enable_recv_pipe": true, 00:21:48.138 "enable_quickack": false, 00:21:48.138 "enable_placement_id": 0, 00:21:48.138 "enable_zerocopy_send_server": true, 00:21:48.138 "enable_zerocopy_send_client": false, 00:21:48.138 "zerocopy_threshold": 0, 00:21:48.138 "tls_version": 0, 00:21:48.138 "enable_ktls": false 00:21:48.138 } 00:21:48.138 } 00:21:48.138 ] 00:21:48.138 }, 00:21:48.138 { 00:21:48.138 "subsystem": "vmd", 00:21:48.138 "config": [] 00:21:48.138 }, 00:21:48.138 { 00:21:48.138 "subsystem": "accel", 00:21:48.138 "config": [ 00:21:48.138 { 00:21:48.138 "method": "accel_set_options", 00:21:48.138 "params": { 00:21:48.138 "small_cache_size": 128, 00:21:48.138 "large_cache_size": 16, 00:21:48.138 "task_count": 2048, 00:21:48.138 "sequence_count": 2048, 00:21:48.138 "buf_count": 2048 00:21:48.138 } 00:21:48.138 } 00:21:48.138 ] 00:21:48.138 }, 00:21:48.138 { 00:21:48.138 "subsystem": "bdev", 00:21:48.138 "config": [ 00:21:48.138 { 00:21:48.138 "method": "bdev_set_options", 00:21:48.138 "params": { 00:21:48.138 "bdev_io_pool_size": 65535, 00:21:48.138 "bdev_io_cache_size": 256, 00:21:48.138 "bdev_auto_examine": true, 00:21:48.138 "iobuf_small_cache_size": 128, 00:21:48.138 "iobuf_large_cache_size": 16 00:21:48.138 } 00:21:48.138 }, 00:21:48.138 { 00:21:48.138 "method": "bdev_raid_set_options", 00:21:48.138 "params": { 00:21:48.138 "process_window_size_kb": 1024 00:21:48.138 } 00:21:48.138 }, 00:21:48.138 { 00:21:48.138 "method": "bdev_iscsi_set_options", 00:21:48.138 "params": { 00:21:48.138 "timeout_sec": 30 00:21:48.138 } 00:21:48.138 }, 00:21:48.138 { 00:21:48.138 "method": "bdev_nvme_set_options", 00:21:48.138 "params": { 00:21:48.138 "action_on_timeout": "none", 00:21:48.138 "timeout_us": 0, 00:21:48.138 "timeout_admin_us": 0, 00:21:48.138 "keep_alive_timeout_ms": 10000, 00:21:48.138 "arbitration_burst": 0, 00:21:48.138 "low_priority_weight": 0, 00:21:48.138 "medium_priority_weight": 0, 00:21:48.138 "high_priority_weight": 0, 00:21:48.138 "nvme_adminq_poll_period_us": 10000, 00:21:48.138 "nvme_ioq_poll_period_us": 0, 00:21:48.138 "io_queue_requests": 0, 00:21:48.138 "delay_cmd_submit": true, 00:21:48.138 "transport_retry_count": 4, 00:21:48.138 "bdev_retry_count": 3, 00:21:48.138 "transport_ack_timeout": 0, 00:21:48.138 "ctrlr_loss_timeout_sec": 0, 00:21:48.138 "reconnect_delay_sec": 0, 00:21:48.138 "fast_io_fail_timeout_sec": 0, 00:21:48.138 "disable_auto_failback": false, 00:21:48.138 "generate_uuids": false, 00:21:48.138 "transport_tos": 0, 00:21:48.138 "nvme_error_stat": false, 00:21:48.138 "rdma_srq_size": 0, 
00:21:48.138 "io_path_stat": false, 00:21:48.138 "allow_accel_sequence": false, 00:21:48.138 "rdma_max_cq_size": 0, 00:21:48.138 "rdma_cm_event_timeout_ms": 0, 00:21:48.138 "dhchap_digests": [ 00:21:48.138 "sha256", 00:21:48.138 "sha384", 00:21:48.138 "sha512" 00:21:48.138 ], 00:21:48.138 "dhchap_dhgroups": [ 00:21:48.138 "null", 00:21:48.138 "ffdhe2048", 00:21:48.138 "ffdhe3072", 00:21:48.138 "ffdhe4096", 00:21:48.138 "ffdhe6144", 00:21:48.138 "ffdhe8192" 00:21:48.138 ] 00:21:48.138 } 00:21:48.138 }, 00:21:48.138 { 00:21:48.138 "method": "bdev_nvme_set_hotplug", 00:21:48.138 "params": { 00:21:48.138 "period_us": 100000, 00:21:48.138 "enable": false 00:21:48.138 } 00:21:48.138 }, 00:21:48.138 { 00:21:48.138 "method": "bdev_malloc_create", 00:21:48.138 "params": { 00:21:48.138 "name": "malloc0", 00:21:48.138 "num_blocks": 8192, 00:21:48.138 "block_size": 4096, 00:21:48.138 "physical_block_size": 4096, 00:21:48.138 "uuid": "9294731d-9d7b-43c8-afea-aa684768dac6", 00:21:48.138 "optimal_io_boundary": 0 00:21:48.138 } 00:21:48.138 }, 00:21:48.138 { 00:21:48.138 "method": "bdev_wait_for_examine" 00:21:48.138 } 00:21:48.138 ] 00:21:48.138 }, 00:21:48.138 { 00:21:48.138 "subsystem": "nbd", 00:21:48.138 "config": [] 00:21:48.138 }, 00:21:48.138 { 00:21:48.138 "subsystem": "scheduler", 00:21:48.138 "config": [ 00:21:48.138 { 00:21:48.138 "method": "framework_set_scheduler", 00:21:48.138 "params": { 00:21:48.138 "name": "static" 00:21:48.138 } 00:21:48.138 } 00:21:48.138 ] 00:21:48.138 }, 00:21:48.138 { 00:21:48.138 "subsystem": "nvmf", 00:21:48.138 "config": [ 00:21:48.138 { 00:21:48.138 "method": "nvmf_set_config", 00:21:48.138 "params": { 00:21:48.138 "discovery_filter": "match_any", 00:21:48.138 "admin_cmd_passthru": { 00:21:48.138 "identify_ctrlr": false 00:21:48.138 } 00:21:48.138 } 00:21:48.138 }, 00:21:48.138 { 00:21:48.138 "method": "nvmf_set_max_subsystems", 00:21:48.138 "params": { 00:21:48.138 "max_subsystems": 1024 00:21:48.138 } 00:21:48.138 }, 00:21:48.138 { 00:21:48.138 "method": "nvmf_set_crdt", 00:21:48.138 "params": { 00:21:48.138 "crdt1": 0, 00:21:48.138 "crdt2": 0, 00:21:48.138 "crdt3": 0 00:21:48.138 } 00:21:48.138 }, 00:21:48.138 { 00:21:48.138 "method": "nvmf_create_transport", 00:21:48.138 "params": { 00:21:48.138 "trtype": "TCP", 00:21:48.138 "max_queue_depth": 128, 00:21:48.138 "max_io_qpairs_per_ctrlr": 127, 00:21:48.139 "in_capsule_data_size": 4096, 00:21:48.139 "max_io_size": 131072, 00:21:48.139 "io_unit_size": 131072, 00:21:48.139 "max_aq_depth": 128, 00:21:48.139 "num_shared_buffers": 511, 00:21:48.139 "buf_cache_size": 4294967295, 00:21:48.139 "dif_insert_or_strip": false, 00:21:48.139 "zcopy": false, 00:21:48.139 "c2h_success": false, 00:21:48.139 "sock_priority": 0, 00:21:48.139 "abort_timeout_sec": 1, 00:21:48.139 "ack_timeout": 0, 00:21:48.139 "data_wr_pool_size": 0 00:21:48.139 } 00:21:48.139 }, 00:21:48.139 { 00:21:48.139 "method": "nvmf_create_subsystem", 00:21:48.139 "params": { 00:21:48.139 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.139 "allow_any_host": false, 00:21:48.139 "serial_number": "00000000000000000000", 00:21:48.139 "model_number": "SPDK bdev Controller", 00:21:48.139 "max_namespaces": 32, 00:21:48.139 "min_cntlid": 1, 00:21:48.139 "max_cntlid": 65519, 00:21:48.139 "ana_reporting": false 00:21:48.139 } 00:21:48.139 }, 00:21:48.139 { 00:21:48.139 "method": "nvmf_subsystem_add_host", 00:21:48.139 "params": { 00:21:48.139 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.139 "host": "nqn.2016-06.io.spdk:host1", 00:21:48.139 "psk": "key0" 00:21:48.139 } 
00:21:48.139 }, 00:21:48.139 { 00:21:48.139 "method": "nvmf_subsystem_add_ns", 00:21:48.139 "params": { 00:21:48.139 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.139 "namespace": { 00:21:48.139 "nsid": 1, 00:21:48.139 "bdev_name": "malloc0", 00:21:48.139 "nguid": "9294731D9D7B43C8AFEAAA684768DAC6", 00:21:48.139 "uuid": "9294731d-9d7b-43c8-afea-aa684768dac6", 00:21:48.139 "no_auto_visible": false 00:21:48.139 } 00:21:48.139 } 00:21:48.139 }, 00:21:48.139 { 00:21:48.139 "method": "nvmf_subsystem_add_listener", 00:21:48.139 "params": { 00:21:48.139 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.139 "listen_address": { 00:21:48.139 "trtype": "TCP", 00:21:48.139 "adrfam": "IPv4", 00:21:48.139 "traddr": "10.0.0.2", 00:21:48.139 "trsvcid": "4420" 00:21:48.139 }, 00:21:48.139 "secure_channel": true 00:21:48.139 } 00:21:48.139 } 00:21:48.139 ] 00:21:48.139 } 00:21:48.139 ] 00:21:48.139 }' 00:21:48.139 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.139 11:37:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2839207 00:21:48.139 11:37:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:48.139 11:37:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2839207 00:21:48.139 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2839207 ']' 00:21:48.139 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.139 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:48.139 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.139 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:48.139 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.139 [2024-07-15 11:37:22.542154] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:21:48.139 [2024-07-15 11:37:22.542215] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.139 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.397 [2024-07-15 11:37:22.629633] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.397 [2024-07-15 11:37:22.715566] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.397 [2024-07-15 11:37:22.715611] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.397 [2024-07-15 11:37:22.715622] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.397 [2024-07-15 11:37:22.715630] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.397 [2024-07-15 11:37:22.715638] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:48.397 [2024-07-15 11:37:22.715706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.654 [2024-07-15 11:37:22.935332] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.654 [2024-07-15 11:37:22.967325] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:48.654 [2024-07-15 11:37:22.980602] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.220 11:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:49.220 11:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:49.220 11:37:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:49.220 11:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:49.220 11:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.220 11:37:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.220 11:37:23 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=2839437 00:21:49.220 11:37:23 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 2839437 /var/tmp/bdevperf.sock 00:21:49.220 11:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2839437 ']' 00:21:49.220 11:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:49.220 11:37:23 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:49.220 11:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:49.220 11:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:49.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
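At this point in the trace the target side is up and listening with TLS on 10.0.0.2:4420, and the next lines echo a JSON configuration into a bdevperf initiator over a file descriptor. A minimal sketch of that same flow, reconstructed only from commands visible in this trace (the SPDK_ROOT variable and the bperf.json file name are illustrative; the test itself feeds the config via process substitution, which is why /dev/fd/63 appears in the command line below):

  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Start bdevperf idle (-z), controllable over its own RPC socket, with the
  # JSON config (keyring key0 + bdev_nvme_attach_controller with psk=key0) piped in.
  $SPDK_ROOT/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 -c <(cat bperf.json) &
  # Confirm the TLS-protected controller attached as nvme0.
  $SPDK_ROOT/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
  # Kick off the verify workload (the "Running I/O for 1 seconds..." section further down).
  $SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests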
00:21:49.220 11:37:23 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:21:49.220 "subsystems": [ 00:21:49.220 { 00:21:49.220 "subsystem": "keyring", 00:21:49.220 "config": [ 00:21:49.220 { 00:21:49.220 "method": "keyring_file_add_key", 00:21:49.220 "params": { 00:21:49.220 "name": "key0", 00:21:49.220 "path": "/tmp/tmp.IRRGX256Jx" 00:21:49.220 } 00:21:49.220 } 00:21:49.220 ] 00:21:49.220 }, 00:21:49.220 { 00:21:49.220 "subsystem": "iobuf", 00:21:49.220 "config": [ 00:21:49.220 { 00:21:49.220 "method": "iobuf_set_options", 00:21:49.220 "params": { 00:21:49.220 "small_pool_count": 8192, 00:21:49.220 "large_pool_count": 1024, 00:21:49.220 "small_bufsize": 8192, 00:21:49.220 "large_bufsize": 135168 00:21:49.220 } 00:21:49.220 } 00:21:49.220 ] 00:21:49.220 }, 00:21:49.220 { 00:21:49.220 "subsystem": "sock", 00:21:49.220 "config": [ 00:21:49.220 { 00:21:49.220 "method": "sock_set_default_impl", 00:21:49.220 "params": { 00:21:49.220 "impl_name": "posix" 00:21:49.220 } 00:21:49.220 }, 00:21:49.220 { 00:21:49.220 "method": "sock_impl_set_options", 00:21:49.220 "params": { 00:21:49.220 "impl_name": "ssl", 00:21:49.220 "recv_buf_size": 4096, 00:21:49.221 "send_buf_size": 4096, 00:21:49.221 "enable_recv_pipe": true, 00:21:49.221 "enable_quickack": false, 00:21:49.221 "enable_placement_id": 0, 00:21:49.221 "enable_zerocopy_send_server": true, 00:21:49.221 "enable_zerocopy_send_client": false, 00:21:49.221 "zerocopy_threshold": 0, 00:21:49.221 "tls_version": 0, 00:21:49.221 "enable_ktls": false 00:21:49.221 } 00:21:49.221 }, 00:21:49.221 { 00:21:49.221 "method": "sock_impl_set_options", 00:21:49.221 "params": { 00:21:49.221 "impl_name": "posix", 00:21:49.221 "recv_buf_size": 2097152, 00:21:49.221 "send_buf_size": 2097152, 00:21:49.221 "enable_recv_pipe": true, 00:21:49.221 "enable_quickack": false, 00:21:49.221 "enable_placement_id": 0, 00:21:49.221 "enable_zerocopy_send_server": true, 00:21:49.221 "enable_zerocopy_send_client": false, 00:21:49.221 "zerocopy_threshold": 0, 00:21:49.221 "tls_version": 0, 00:21:49.221 "enable_ktls": false 00:21:49.221 } 00:21:49.221 } 00:21:49.221 ] 00:21:49.221 }, 00:21:49.221 { 00:21:49.221 "subsystem": "vmd", 00:21:49.221 "config": [] 00:21:49.221 }, 00:21:49.221 { 00:21:49.221 "subsystem": "accel", 00:21:49.221 "config": [ 00:21:49.221 { 00:21:49.221 "method": "accel_set_options", 00:21:49.221 "params": { 00:21:49.221 "small_cache_size": 128, 00:21:49.221 "large_cache_size": 16, 00:21:49.221 "task_count": 2048, 00:21:49.221 "sequence_count": 2048, 00:21:49.221 "buf_count": 2048 00:21:49.221 } 00:21:49.221 } 00:21:49.221 ] 00:21:49.221 }, 00:21:49.221 { 00:21:49.221 "subsystem": "bdev", 00:21:49.221 "config": [ 00:21:49.221 { 00:21:49.221 "method": "bdev_set_options", 00:21:49.221 "params": { 00:21:49.221 "bdev_io_pool_size": 65535, 00:21:49.221 "bdev_io_cache_size": 256, 00:21:49.221 "bdev_auto_examine": true, 00:21:49.221 "iobuf_small_cache_size": 128, 00:21:49.221 "iobuf_large_cache_size": 16 00:21:49.221 } 00:21:49.221 }, 00:21:49.221 { 00:21:49.221 "method": "bdev_raid_set_options", 00:21:49.221 "params": { 00:21:49.221 "process_window_size_kb": 1024 00:21:49.221 } 00:21:49.221 }, 00:21:49.221 { 00:21:49.221 "method": "bdev_iscsi_set_options", 00:21:49.221 "params": { 00:21:49.221 "timeout_sec": 30 00:21:49.221 } 00:21:49.221 }, 00:21:49.221 { 00:21:49.221 "method": "bdev_nvme_set_options", 00:21:49.221 "params": { 00:21:49.221 "action_on_timeout": "none", 00:21:49.221 "timeout_us": 0, 00:21:49.221 "timeout_admin_us": 0, 00:21:49.221 "keep_alive_timeout_ms": 
10000, 00:21:49.221 "arbitration_burst": 0, 00:21:49.221 "low_priority_weight": 0, 00:21:49.221 "medium_priority_weight": 0, 00:21:49.221 "high_priority_weight": 0, 00:21:49.221 "nvme_adminq_poll_period_us": 10000, 00:21:49.221 "nvme_ioq_poll_period_us": 0, 00:21:49.221 "io_queue_requests": 512, 00:21:49.221 "delay_cmd_submit": true, 00:21:49.221 "transport_retry_count": 4, 00:21:49.221 "bdev_retry_count": 3, 00:21:49.221 "transport_ack_timeout": 0, 00:21:49.221 "ctrlr_loss_timeout_sec": 0, 00:21:49.221 "reconnect_delay_sec": 0, 00:21:49.221 "fast_io_fail_timeout_sec": 0, 00:21:49.221 "disable_auto_failback": false, 00:21:49.221 "generate_uuids": false, 00:21:49.221 "transport_tos": 0, 00:21:49.221 "nvme_error_stat": false, 00:21:49.221 "rdma_srq_size": 0, 00:21:49.221 "io_path_stat": false, 00:21:49.221 "allow_accel_sequence": false, 00:21:49.221 "rdma_max_cq_size": 0, 00:21:49.221 "rdma_cm_event_timeout_ms": 0, 00:21:49.221 "dhchap_digests": [ 00:21:49.221 "sha256", 00:21:49.221 "sha384", 00:21:49.221 "sha512" 00:21:49.221 ], 00:21:49.221 "dhchap_dhgroups": [ 00:21:49.221 "null", 00:21:49.221 "ffdhe2048", 00:21:49.221 "ffdhe3072", 00:21:49.221 "ffdhe4096", 00:21:49.221 "ffdhe6144", 00:21:49.221 "ffdhe8192" 00:21:49.221 ] 00:21:49.221 } 00:21:49.221 }, 00:21:49.221 { 00:21:49.221 "method": "bdev_nvme_attach_controller", 00:21:49.221 "params": { 00:21:49.221 "name": "nvme0", 00:21:49.221 "trtype": "TCP", 00:21:49.221 "adrfam": "IPv4", 00:21:49.221 "traddr": "10.0.0.2", 00:21:49.221 "trsvcid": "4420", 00:21:49.221 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.221 "prchk_reftag": false, 00:21:49.221 "prchk_guard": false, 00:21:49.221 "ctrlr_loss_timeout_sec": 0, 00:21:49.221 "reconnect_delay_sec": 0, 00:21:49.221 "fast_io_fail_timeout_sec": 0, 00:21:49.221 "psk": "key0", 00:21:49.221 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:49.221 "hdgst": false, 00:21:49.221 "ddgst": false 00:21:49.221 } 00:21:49.221 }, 00:21:49.221 { 00:21:49.221 "method": "bdev_nvme_set_hotplug", 00:21:49.221 "params": { 00:21:49.221 "period_us": 100000, 00:21:49.221 "enable": false 00:21:49.221 } 00:21:49.221 }, 00:21:49.221 { 00:21:49.221 "method": "bdev_enable_histogram", 00:21:49.221 "params": { 00:21:49.221 "name": "nvme0n1", 00:21:49.221 "enable": true 00:21:49.221 } 00:21:49.221 }, 00:21:49.221 { 00:21:49.221 "method": "bdev_wait_for_examine" 00:21:49.221 } 00:21:49.221 ] 00:21:49.221 }, 00:21:49.221 { 00:21:49.221 "subsystem": "nbd", 00:21:49.221 "config": [] 00:21:49.221 } 00:21:49.221 ] 00:21:49.221 }' 00:21:49.221 11:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:49.221 11:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.221 [2024-07-15 11:37:23.572102] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:21:49.221 [2024-07-15 11:37:23.572164] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2839437 ] 00:21:49.221 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.221 [2024-07-15 11:37:23.654549] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.480 [2024-07-15 11:37:23.756354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.480 [2024-07-15 11:37:23.920407] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:50.047 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:50.047 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:50.047 11:37:24 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:50.047 11:37:24 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:21:50.306 11:37:24 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.306 11:37:24 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:50.565 Running I/O for 1 seconds... 00:21:51.509 00:21:51.509 Latency(us) 00:21:51.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:51.509 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:51.509 Verification LBA range: start 0x0 length 0x2000 00:21:51.509 nvme0n1 : 1.03 3710.65 14.49 0.00 0.00 34051.10 9353.77 31218.97 00:21:51.509 =================================================================================================================== 00:21:51.509 Total : 3710.65 14.49 0.00 0.00 34051.10 9353.77 31218.97 00:21:51.509 0 00:21:51.509 11:37:25 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:21:51.509 11:37:25 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:21:51.509 11:37:25 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:51.509 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:21:51.509 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:21:51.509 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:51.509 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:51.509 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:51.509 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:51.509 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:51.509 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:51.509 nvmf_trace.0 00:21:51.807 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:21:51.807 11:37:25 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2839437 00:21:51.807 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2839437 ']' 00:21:51.807 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 2839437 00:21:51.807 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:51.807 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:51.807 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2839437 00:21:51.807 11:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:51.807 11:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:51.807 11:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2839437' 00:21:51.807 killing process with pid 2839437 00:21:51.807 11:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2839437 00:21:51.807 Received shutdown signal, test time was about 1.000000 seconds 00:21:51.807 00:21:51.807 Latency(us) 00:21:51.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:51.807 =================================================================================================================== 00:21:51.807 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:51.807 11:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2839437 00:21:51.807 11:37:26 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:51.807 11:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:51.807 11:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:51.807 11:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:51.807 11:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:51.807 11:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:51.807 11:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:52.094 rmmod nvme_tcp 00:21:52.094 rmmod nvme_fabrics 00:21:52.094 rmmod nvme_keyring 00:21:52.094 11:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:52.094 11:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:52.094 11:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:52.094 11:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2839207 ']' 00:21:52.094 11:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2839207 00:21:52.094 11:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2839207 ']' 00:21:52.094 11:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2839207 00:21:52.094 11:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:52.094 11:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:52.094 11:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2839207 00:21:52.094 11:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:52.094 11:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:52.094 11:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2839207' 00:21:52.094 killing process with pid 2839207 00:21:52.094 11:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2839207 00:21:52.094 11:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2839207 00:21:52.353 11:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:52.353 11:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:52.353 11:37:26 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:52.353 11:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:52.353 11:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:52.353 11:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.353 11:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:52.353 11:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.256 11:37:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:54.256 11:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.nbtjDLiMkJ /tmp/tmp.wJdUbDkPsF /tmp/tmp.IRRGX256Jx 00:21:54.256 00:21:54.256 real 1m35.162s 00:21:54.256 user 2m35.018s 00:21:54.256 sys 0m27.668s 00:21:54.256 11:37:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:54.257 11:37:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:54.257 ************************************ 00:21:54.257 END TEST nvmf_tls 00:21:54.257 ************************************ 00:21:54.257 11:37:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:54.257 11:37:28 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:54.257 11:37:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:54.257 11:37:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:54.257 11:37:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:54.257 ************************************ 00:21:54.257 START TEST nvmf_fips 00:21:54.257 ************************************ 00:21:54.257 11:37:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:54.516 * Looking for test storage... 
00:21:54.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.516 11:37:28 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:54.516 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:21:54.517 11:37:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:21:54.775 Error setting digest 00:21:54.775 00827C18457F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:54.775 00827C18457F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:54.775 11:37:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:21:54.775 11:37:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:54.775 11:37:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:54.775 11:37:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:54.775 11:37:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:54.775 11:37:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:54.775 11:37:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.775 11:37:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:54.775 11:37:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:54.775 11:37:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:54.775 11:37:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.775 11:37:29 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:54.775 11:37:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.775 11:37:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:54.775 11:37:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:54.775 11:37:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:54.775 11:37:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:01.332 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:01.332 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:01.332 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:01.332 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:01.332 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:01.332 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:01.332 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:01.332 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:01.332 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:01.332 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:01.332 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:22:01.332 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:22:01.332 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:01.332 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:01.332 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:01.332 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:01.333 
11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:01.333 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:01.333 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:01.333 Found net devices under 0000:af:00.0: cvl_0_0 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:01.333 Found net devices under 0000:af:00.1: cvl_0_1 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:01.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:01.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:22:01.333 00:22:01.333 --- 10.0.0.2 ping statistics --- 00:22:01.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.333 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:01.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:01.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:22:01.333 00:22:01.333 --- 10.0.0.1 ping statistics --- 00:22:01.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.333 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2843631 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2843631 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2843631 ']' 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:01.333 11:37:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:01.333 [2024-07-15 11:37:34.927417] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:22:01.333 [2024-07-15 11:37:34.927483] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.333 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.333 [2024-07-15 11:37:35.014658] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.333 [2024-07-15 11:37:35.118803] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.333 [2024-07-15 11:37:35.118846] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
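The commands traced above wire the two ice-driven E810 ports into a self-contained NVMe/TCP test bed: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as the 10.0.0.2 target side, cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator side, TCP port 4420 is opened with iptables, and a ping in each direction confirms the link before nvmf_tgt is started inside the namespace. A condensed sketch of that wiring, using only the interface names and addresses visible in the trace (an illustrative reduction of nvmf_tcp_init, not the script verbatim):

TARGET_IF=cvl_0_0         # target-side port, moved into a private namespace
INITIATOR_IF=cvl_0_1      # initiator-side port, stays in the root namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
# allow NVMe/TCP (port 4420) in on the initiator-facing interface
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# sanity-check the point-to-point link in both directions
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

The nvmf_tgt process that follows is launched through ip netns exec cvl_0_0_ns_spdk, so it only ever sees the namespaced 10.0.0.2 port.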
00:22:01.333 [2024-07-15 11:37:35.118859] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.333 [2024-07-15 11:37:35.118870] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.333 [2024-07-15 11:37:35.118879] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:01.333 [2024-07-15 11:37:35.118904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.592 11:37:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:01.592 11:37:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:22:01.592 11:37:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:01.592 11:37:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:01.592 11:37:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:01.592 11:37:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.592 11:37:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:01.592 11:37:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:01.592 11:37:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:01.592 11:37:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:01.592 11:37:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:01.592 11:37:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:01.592 11:37:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:01.592 11:37:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:02.159 [2024-07-15 11:37:36.349526] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.159 [2024-07-15 11:37:36.365489] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:02.159 [2024-07-15 11:37:36.365713] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:02.159 [2024-07-15 11:37:36.395809] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:02.159 malloc0 00:22:02.159 11:37:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:02.159 11:37:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2843957 00:22:02.159 11:37:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2843957 /var/tmp/bdevperf.sock 00:22:02.159 11:37:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:02.159 11:37:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2843957 ']' 00:22:02.159 11:37:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.160 11:37:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:22:02.160 11:37:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.160 11:37:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.160 11:37:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:02.160 [2024-07-15 11:37:36.504360] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:22:02.160 [2024-07-15 11:37:36.504420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2843957 ] 00:22:02.160 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.160 [2024-07-15 11:37:36.614949] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.418 [2024-07-15 11:37:36.762635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.986 11:37:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:02.986 11:37:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:22:02.986 11:37:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:03.245 [2024-07-15 11:37:37.578380] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:03.245 [2024-07-15 11:37:37.578546] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:03.245 TLSTESTn1 00:22:03.245 11:37:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:03.503 Running I/O for 10 seconds... 
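By this point the FIPS test has written the pre-shared key (NVMeTLSkey-1:01:...) to test/nvmf/fips/key.txt with mode 0600, configured the target to listen with TLS on 10.0.0.2:4420, started a separate bdevperf process in wait-for-RPC mode, and attached a TLS-protected NVMe/TCP controller to it before launching the 10-second verify workload whose results follow. Reduced to the initiator-side calls visible in the trace (the rpc.py and bdevperf paths are abbreviated placeholders), the sequence looks roughly like this:

RPC=./scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
KEY=./test/nvmf/fips/key.txt      # PSK in NVMeTLSkey-1:01:... form, chmod 0600

# start bdevperf idle (-z) on its own core, driven over a private RPC socket
./build/examples/bdevperf -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 &

# attach a TLS-protected controller using the PSK; the namespace shows up as TLSTESTn1
$RPC -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"

# kick off the queued verify run; results are reported after the 10 seconds elapse
./examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests

In this SPDK revision --psk takes the key file path directly; the warnings in the trace note that this form is deprecated and scheduled for removal in v24.09.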
00:22:13.477 00:22:13.477 Latency(us) 00:22:13.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.477 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:13.477 Verification LBA range: start 0x0 length 0x2000 00:22:13.477 TLSTESTn1 : 10.03 2818.72 11.01 0.00 0.00 45278.58 11975.21 43372.92 00:22:13.477 =================================================================================================================== 00:22:13.477 Total : 2818.72 11.01 0.00 0.00 45278.58 11975.21 43372.92 00:22:13.477 0 00:22:13.477 11:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:13.477 11:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:13.477 11:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:22:13.477 11:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:22:13.477 11:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:22:13.477 11:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:13.477 11:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:22:13.477 11:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:22:13.477 11:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:22:13.477 11:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:13.477 nvmf_trace.0 00:22:13.736 11:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:22:13.736 11:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2843957 00:22:13.736 11:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2843957 ']' 00:22:13.736 11:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2843957 00:22:13.736 11:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:22:13.736 11:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:13.736 11:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2843957 00:22:13.736 11:37:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:13.736 11:37:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:13.736 11:37:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2843957' 00:22:13.736 killing process with pid 2843957 00:22:13.736 11:37:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2843957 00:22:13.736 Received shutdown signal, test time was about 10.000000 seconds 00:22:13.736 00:22:13.736 Latency(us) 00:22:13.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.736 =================================================================================================================== 00:22:13.736 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:13.736 [2024-07-15 11:37:48.033396] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:13.736 11:37:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2843957 00:22:13.995 11:37:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:13.995 11:37:48 nvmf_tcp.nvmf_fips 
-- nvmf/common.sh@488 -- # nvmfcleanup 00:22:13.995 11:37:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:13.995 11:37:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:13.995 11:37:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:13.995 11:37:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:13.995 11:37:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:13.995 rmmod nvme_tcp 00:22:13.995 rmmod nvme_fabrics 00:22:13.995 rmmod nvme_keyring 00:22:13.995 11:37:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:13.995 11:37:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:13.995 11:37:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:13.995 11:37:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2843631 ']' 00:22:13.995 11:37:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2843631 00:22:13.995 11:37:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2843631 ']' 00:22:13.995 11:37:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2843631 00:22:13.995 11:37:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:22:13.995 11:37:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:13.995 11:37:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2843631 00:22:14.254 11:37:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:14.254 11:37:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:14.254 11:37:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2843631' 00:22:14.254 killing process with pid 2843631 00:22:14.254 11:37:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2843631 00:22:14.254 [2024-07-15 11:37:48.500134] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:14.254 11:37:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2843631 00:22:14.512 11:37:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:14.512 11:37:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:14.512 11:37:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:14.512 11:37:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:14.512 11:37:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:14.512 11:37:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.512 11:37:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:14.512 11:37:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.415 11:37:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:16.415 11:37:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:16.415 00:22:16.415 real 0m22.098s 00:22:16.415 user 0m25.262s 00:22:16.415 sys 0m8.623s 00:22:16.415 11:37:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:16.415 11:37:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:16.415 ************************************ 00:22:16.415 END TEST nvmf_fips 
00:22:16.415 ************************************ 00:22:16.415 11:37:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:16.415 11:37:50 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:22:16.415 11:37:50 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:22:16.415 11:37:50 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:22:16.415 11:37:50 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:22:16.415 11:37:50 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:22:16.415 11:37:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:22.979 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:22.979 11:37:56 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:22.979 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:22.979 Found net devices under 0000:af:00.0: cvl_0_0 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:22.979 Found net devices under 0000:af:00.1: cvl_0_1 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:22:22.979 11:37:56 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:22.979 11:37:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:22.979 11:37:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:22:22.979 11:37:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:22.979 ************************************ 00:22:22.979 START TEST nvmf_perf_adq 00:22:22.979 ************************************ 00:22:22.979 11:37:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:22.979 * Looking for test storage... 00:22:22.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:22.979 11:37:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:22.979 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:22.979 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.979 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.979 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.979 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.979 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:22.979 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:22.979 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.979 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.979 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.979 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.979 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:22.979 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:22.979 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.979 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.979 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:22.979 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:22.979 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:22.979 11:37:56 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.979 11:37:56 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.979 11:37:56 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.979 11:37:56 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.979 11:37:56 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.980 11:37:56 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.980 11:37:56 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:22.980 11:37:56 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.980 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:22.980 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:22.980 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:22.980 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:22.980 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.980 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.980 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:22.980 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:22.980 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:22.980 11:37:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:22.980 11:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:22.980 11:37:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:28.249 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:28.249 Found 0000:af:00.1 (0x8086 - 0x159b) 
00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:28.249 Found net devices under 0000:af:00.0: cvl_0_0 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:28.249 Found net devices under 0000:af:00.1: cvl_0_1 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:28.249 11:38:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:29.183 11:38:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:31.090 11:38:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:36.364 11:38:10 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:36.364 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:36.364 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:36.364 Found net devices under 0000:af:00.0: cvl_0_0 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:36.364 Found net devices under 0000:af:00.1: cvl_0_1 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:36.364 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:36.364 11:38:10 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:36.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:36.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:22:36.364 00:22:36.364 --- 10.0.0.2 ping statistics --- 00:22:36.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.364 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:22:36.365 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:36.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:36.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:22:36.365 00:22:36.365 --- 10.0.0.1 ping statistics --- 00:22:36.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.365 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:22:36.365 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:36.365 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:36.365 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:36.365 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:36.365 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:36.365 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:36.365 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:36.365 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:36.365 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:36.365 11:38:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:36.365 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:36.365 11:38:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:36.365 11:38:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.365 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2854234 00:22:36.365 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2854234 00:22:36.365 11:38:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:36.365 11:38:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2854234 ']' 00:22:36.365 11:38:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.365 11:38:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:36.365 11:38:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:36.365 11:38:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:36.365 11:38:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.365 [2024-07-15 11:38:10.779315] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:22:36.365 [2024-07-15 11:38:10.779374] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.365 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.635 [2024-07-15 11:38:10.865925] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:36.636 [2024-07-15 11:38:10.957932] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.636 [2024-07-15 11:38:10.957976] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.636 [2024-07-15 11:38:10.957986] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:36.636 [2024-07-15 11:38:10.957995] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:36.636 [2024-07-15 11:38:10.958003] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:36.636 [2024-07-15 11:38:10.958058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.636 [2024-07-15 11:38:10.958170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.636 [2024-07-15 11:38:10.958308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:36.636 [2024-07-15 11:38:10.958310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.569 [2024-07-15 11:38:11.916238] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.569 Malloc1 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.569 [2024-07-15 11:38:11.972192] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2854522 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:37.569 11:38:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:37.569 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.117 11:38:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:40.117 11:38:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.117 11:38:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.117 11:38:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.117 11:38:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:40.117 
"tick_rate": 2200000000, 00:22:40.117 "poll_groups": [ 00:22:40.117 { 00:22:40.117 "name": "nvmf_tgt_poll_group_000", 00:22:40.117 "admin_qpairs": 1, 00:22:40.117 "io_qpairs": 1, 00:22:40.117 "current_admin_qpairs": 1, 00:22:40.117 "current_io_qpairs": 1, 00:22:40.117 "pending_bdev_io": 0, 00:22:40.117 "completed_nvme_io": 12286, 00:22:40.117 "transports": [ 00:22:40.117 { 00:22:40.117 "trtype": "TCP" 00:22:40.117 } 00:22:40.117 ] 00:22:40.117 }, 00:22:40.117 { 00:22:40.117 "name": "nvmf_tgt_poll_group_001", 00:22:40.117 "admin_qpairs": 0, 00:22:40.117 "io_qpairs": 1, 00:22:40.117 "current_admin_qpairs": 0, 00:22:40.117 "current_io_qpairs": 1, 00:22:40.117 "pending_bdev_io": 0, 00:22:40.117 "completed_nvme_io": 8338, 00:22:40.117 "transports": [ 00:22:40.117 { 00:22:40.117 "trtype": "TCP" 00:22:40.117 } 00:22:40.117 ] 00:22:40.117 }, 00:22:40.117 { 00:22:40.117 "name": "nvmf_tgt_poll_group_002", 00:22:40.117 "admin_qpairs": 0, 00:22:40.117 "io_qpairs": 1, 00:22:40.117 "current_admin_qpairs": 0, 00:22:40.117 "current_io_qpairs": 1, 00:22:40.117 "pending_bdev_io": 0, 00:22:40.117 "completed_nvme_io": 8402, 00:22:40.117 "transports": [ 00:22:40.117 { 00:22:40.117 "trtype": "TCP" 00:22:40.117 } 00:22:40.117 ] 00:22:40.117 }, 00:22:40.117 { 00:22:40.117 "name": "nvmf_tgt_poll_group_003", 00:22:40.117 "admin_qpairs": 0, 00:22:40.117 "io_qpairs": 1, 00:22:40.117 "current_admin_qpairs": 0, 00:22:40.117 "current_io_qpairs": 1, 00:22:40.117 "pending_bdev_io": 0, 00:22:40.117 "completed_nvme_io": 13529, 00:22:40.117 "transports": [ 00:22:40.117 { 00:22:40.117 "trtype": "TCP" 00:22:40.117 } 00:22:40.117 ] 00:22:40.117 } 00:22:40.117 ] 00:22:40.117 }' 00:22:40.117 11:38:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:40.117 11:38:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:40.117 11:38:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:40.117 11:38:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:40.117 11:38:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2854522 00:22:48.238 Initializing NVMe Controllers 00:22:48.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:48.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:48.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:48.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:48.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:48.238 Initialization complete. Launching workers. 
00:22:48.238 ======================================================== 00:22:48.238 Latency(us) 00:22:48.238 Device Information : IOPS MiB/s Average min max 00:22:48.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7164.11 27.98 8934.56 3295.74 14747.01 00:22:48.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4438.56 17.34 14429.30 5535.90 23994.23 00:22:48.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4460.26 17.42 14358.12 5349.03 24499.39 00:22:48.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6530.55 25.51 9806.01 2936.89 16333.83 00:22:48.238 ======================================================== 00:22:48.238 Total : 22593.48 88.26 11336.59 2936.89 24499.39 00:22:48.238 00:22:48.238 11:38:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:48.238 11:38:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:48.238 11:38:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:48.238 11:38:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:48.238 11:38:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:48.238 11:38:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:48.238 11:38:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:48.238 rmmod nvme_tcp 00:22:48.238 rmmod nvme_fabrics 00:22:48.238 rmmod nvme_keyring 00:22:48.238 11:38:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:48.238 11:38:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:48.238 11:38:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:48.238 11:38:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2854234 ']' 00:22:48.238 11:38:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2854234 00:22:48.238 11:38:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2854234 ']' 00:22:48.238 11:38:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2854234 00:22:48.238 11:38:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:48.239 11:38:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:48.239 11:38:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2854234 00:22:48.239 11:38:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:48.239 11:38:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:48.239 11:38:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2854234' 00:22:48.239 killing process with pid 2854234 00:22:48.239 11:38:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2854234 00:22:48.239 11:38:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2854234 00:22:48.239 11:38:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:48.239 11:38:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:48.239 11:38:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:48.239 11:38:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:48.239 11:38:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:48.239 11:38:22 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.239 11:38:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:48.239 11:38:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.209 11:38:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:50.209 11:38:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:50.209 11:38:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:51.586 11:38:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:54.116 11:38:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:59.409 11:38:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:59.409 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:59.409 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.409 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:59.409 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:59.409 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:59.409 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.409 11:38:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:59.409 11:38:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.410 11:38:33 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:59.410 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:59.410 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
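For reference, the device-discovery trace above boils down to globbing the kernel's sysfs view of each matched PCI function. A minimal sketch of that loop, assuming the same two E810 (0x8086:0x159b) functions found on this host, is:

# Hedged sketch of how nvmf/common.sh maps a PCI address to its net device name(s).
for pci in 0000:af:00.0 0000:af:00.1; do               # E810 functions reported in this run
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")             # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done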
00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:59.410 Found net devices under 0000:af:00.0: cvl_0_0 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:59.410 Found net devices under 0000:af:00.1: cvl_0_1 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:59.410 
11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:59.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:59.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:22:59.410 00:22:59.410 --- 10.0.0.2 ping statistics --- 00:22:59.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.410 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:59.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:59.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:22:59.410 00:22:59.410 --- 10.0.0.1 ping statistics --- 00:22:59.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.410 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:59.410 net.core.busy_poll = 1 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:59.410 net.core.busy_read = 1 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:59.410 11:38:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:59.411 11:38:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:22:59.411 11:38:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:59.411 11:38:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:59.411 11:38:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:59.411 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:59.411 11:38:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:59.411 11:38:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:59.411 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2858584 00:22:59.411 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2858584 00:22:59.411 11:38:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:59.411 11:38:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2858584 ']' 00:22:59.411 11:38:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.411 11:38:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:59.411 11:38:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.411 11:38:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:59.411 11:38:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:59.411 [2024-07-15 11:38:33.638755] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:22:59.411 [2024-07-15 11:38:33.638817] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.411 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.411 [2024-07-15 11:38:33.719336] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:59.411 [2024-07-15 11:38:33.809884] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.411 [2024-07-15 11:38:33.809928] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:59.411 [2024-07-15 11:38:33.809943] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.411 [2024-07-15 11:38:33.809951] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:59.411 [2024-07-15 11:38:33.809959] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
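Before this second nvmf_tgt instance was started, adq_configure_driver programmed the NIC and kernel as traced a few lines back. A minimal consolidated sketch of that host-side ADQ setup is below; the cvl_0_0 interface name, the cvl_0_0_ns_spdk namespace, the 2+2 queue split, and the 4420 listener port are the values used in this run and may differ on other hosts.

# Hedged recap of the ADQ host configuration performed by perf_adq.sh (adq_configure_driver).
NS="ip netns exec cvl_0_0_ns_spdk"
IFACE=cvl_0_0

$NS ethtool --offload $IFACE hw-tc-offload on                      # enable hardware TC offload on the E810 port
$NS ethtool --set-priv-flags $IFACE channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1                                     # busy-poll sockets instead of sleeping in epoll
sysctl -w net.core.busy_read=1
# Two traffic classes in channel mode: TC0 (default) on queues 0-1, TC1 (NVMe/TCP) on queues 2-3.
$NS /usr/sbin/tc qdisc add dev $IFACE root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$NS /usr/sbin/tc qdisc add dev $IFACE ingress
# Steer NVMe/TCP traffic for the 10.0.0.2:4420 listener into TC1 in hardware.
$NS /usr/sbin/tc filter add dev $IFACE protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1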
00:22:59.411 [2024-07-15 11:38:33.813279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.411 [2024-07-15 11:38:33.813317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.411 [2024-07-15 11:38:33.813428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:59.411 [2024-07-15 11:38:33.813430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:00.347 [2024-07-15 11:38:34.765501] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:00.347 Malloc1 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.347 11:38:34 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.347 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:00.606 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.606 11:38:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:00.606 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.606 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:00.606 [2024-07-15 11:38:34.821649] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.606 11:38:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.606 11:38:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2858869 00:23:00.607 11:38:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:23:00.607 11:38:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:00.607 EAL: No free 2048 kB hugepages reported on node 1 00:23:02.512 11:38:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:23:02.512 11:38:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.512 11:38:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.512 11:38:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.512 11:38:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:23:02.512 "tick_rate": 2200000000, 00:23:02.512 "poll_groups": [ 00:23:02.512 { 00:23:02.512 "name": "nvmf_tgt_poll_group_000", 00:23:02.512 "admin_qpairs": 1, 00:23:02.512 "io_qpairs": 2, 00:23:02.512 "current_admin_qpairs": 1, 00:23:02.512 "current_io_qpairs": 2, 00:23:02.512 "pending_bdev_io": 0, 00:23:02.512 "completed_nvme_io": 15518, 00:23:02.512 "transports": [ 00:23:02.512 { 00:23:02.512 "trtype": "TCP" 00:23:02.512 } 00:23:02.512 ] 00:23:02.512 }, 00:23:02.512 { 00:23:02.512 "name": "nvmf_tgt_poll_group_001", 00:23:02.512 "admin_qpairs": 0, 00:23:02.512 "io_qpairs": 2, 00:23:02.512 "current_admin_qpairs": 0, 00:23:02.512 "current_io_qpairs": 2, 00:23:02.512 "pending_bdev_io": 0, 00:23:02.512 "completed_nvme_io": 10628, 00:23:02.512 "transports": [ 00:23:02.512 { 00:23:02.512 "trtype": "TCP" 00:23:02.512 } 00:23:02.512 ] 00:23:02.512 }, 00:23:02.512 { 00:23:02.512 "name": "nvmf_tgt_poll_group_002", 00:23:02.512 "admin_qpairs": 0, 00:23:02.512 "io_qpairs": 0, 00:23:02.512 "current_admin_qpairs": 0, 00:23:02.512 "current_io_qpairs": 0, 00:23:02.512 "pending_bdev_io": 0, 00:23:02.512 "completed_nvme_io": 0, 
00:23:02.512 "transports": [ 00:23:02.512 { 00:23:02.512 "trtype": "TCP" 00:23:02.512 } 00:23:02.512 ] 00:23:02.512 }, 00:23:02.512 { 00:23:02.512 "name": "nvmf_tgt_poll_group_003", 00:23:02.512 "admin_qpairs": 0, 00:23:02.512 "io_qpairs": 0, 00:23:02.512 "current_admin_qpairs": 0, 00:23:02.512 "current_io_qpairs": 0, 00:23:02.512 "pending_bdev_io": 0, 00:23:02.512 "completed_nvme_io": 0, 00:23:02.512 "transports": [ 00:23:02.512 { 00:23:02.512 "trtype": "TCP" 00:23:02.512 } 00:23:02.512 ] 00:23:02.512 } 00:23:02.512 ] 00:23:02.512 }' 00:23:02.512 11:38:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:02.512 11:38:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:23:02.512 11:38:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:23:02.512 11:38:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:23:02.512 11:38:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2858869 00:23:10.633 Initializing NVMe Controllers 00:23:10.633 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:10.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:10.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:10.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:10.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:10.633 Initialization complete. Launching workers. 00:23:10.633 ======================================================== 00:23:10.633 Latency(us) 00:23:10.633 Device Information : IOPS MiB/s Average min max 00:23:10.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 2981.10 11.64 21479.43 6628.18 72870.91 00:23:10.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4911.20 19.18 13032.97 2091.44 60573.01 00:23:10.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 3790.20 14.81 16889.57 2533.85 61877.09 00:23:10.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 2587.20 10.11 24748.91 7699.40 76564.44 00:23:10.633 ======================================================== 00:23:10.633 Total : 14269.70 55.74 17946.07 2091.44 76564.44 00:23:10.633 00:23:10.633 11:38:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:23:10.633 11:38:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:10.633 11:38:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:10.633 11:38:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:10.633 11:38:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:10.633 11:38:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:10.633 11:38:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:10.633 rmmod nvme_tcp 00:23:10.633 rmmod nvme_fabrics 00:23:10.633 rmmod nvme_keyring 00:23:10.633 11:38:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:10.633 11:38:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:10.633 11:38:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:10.633 11:38:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2858584 ']' 00:23:10.633 11:38:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 2858584 00:23:10.633 11:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2858584 ']' 00:23:10.633 11:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2858584 00:23:10.633 11:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:23:10.633 11:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:10.633 11:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2858584 00:23:10.891 11:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:10.891 11:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:10.891 11:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2858584' 00:23:10.891 killing process with pid 2858584 00:23:10.891 11:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2858584 00:23:10.891 11:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2858584 00:23:11.151 11:38:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:11.151 11:38:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:11.151 11:38:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:11.151 11:38:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:11.151 11:38:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:11.151 11:38:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.151 11:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:11.151 11:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.440 11:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:14.440 11:38:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:23:14.440 00:23:14.440 real 0m52.043s 00:23:14.440 user 2m50.983s 00:23:14.440 sys 0m9.545s 00:23:14.440 11:38:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:14.440 11:38:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:14.440 ************************************ 00:23:14.440 END TEST nvmf_perf_adq 00:23:14.440 ************************************ 00:23:14.440 11:38:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:14.440 11:38:48 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:14.440 11:38:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:14.440 11:38:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:14.440 11:38:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:14.440 ************************************ 00:23:14.440 START TEST nvmf_shutdown 00:23:14.440 ************************************ 00:23:14.440 11:38:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:14.440 * Looking for test storage... 
00:23:14.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:14.440 11:38:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:14.440 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:14.440 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:14.440 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.440 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.440 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.440 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.440 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.440 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.440 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.440 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.440 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.440 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:14.440 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:14.440 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.440 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.440 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:14.440 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:14.440 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:14.440 11:38:48 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.440 11:38:48 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.440 11:38:48 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.440 11:38:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.440 11:38:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:14.441 ************************************ 00:23:14.441 START TEST nvmf_shutdown_tc1 00:23:14.441 ************************************ 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:23:14.441 11:38:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:14.441 11:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:19.715 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:19.975 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:19.975 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:19.975 11:38:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:19.975 Found net devices under 0000:af:00.0: cvl_0_0 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:19.975 Found net devices under 0000:af:00.1: cvl_0_1 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:19.975 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.235 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:20.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:20.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:23:20.235 00:23:20.235 --- 10.0.0.2 ping statistics --- 00:23:20.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.235 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:23:20.235 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:20.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:20.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:23:20.235 00:23:20.235 --- 10.0.0.1 ping statistics --- 00:23:20.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.235 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:23:20.235 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.235 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:23:20.235 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:20.235 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.235 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:20.235 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:20.235 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.235 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:20.235 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:20.235 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:20.235 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:20.235 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:20.235 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:20.235 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2864525 00:23:20.235 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2864525 00:23:20.235 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:20.235 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2864525 ']' 00:23:20.235 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.235 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:20.235 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.235 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:20.235 11:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:20.235 [2024-07-15 11:38:54.561460] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
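
For readers reconstructing the test bed from the trace above: nvmf_tcp_init moves one E810 port (cvl_0_0) into a dedicated network namespace to act as the NVMe/TCP target side, leaves the second port (cvl_0_1) in the default namespace as the initiator side, opens TCP port 4420, and checks connectivity in both directions before the target application is launched inside that namespace. The following is a condensed, hand-written sketch of the equivalent root-shell commands, using only the interface names and addresses printed in this log; it is an illustration, not the literal nvmf/common.sh code.

# target side: dedicated namespace, 10.0.0.2/24
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# initiator side: default namespace, 10.0.0.1/24
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
# connectivity checks, as in the ping output above
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt process is then started with "ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1E", which is the startup whose DPDK/EAL banner continues below.
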
00:23:20.235 [2024-07-15 11:38:54.561521] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.235 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.235 [2024-07-15 11:38:54.648592] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:20.494 [2024-07-15 11:38:54.754276] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.494 [2024-07-15 11:38:54.754322] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.494 [2024-07-15 11:38:54.754335] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.494 [2024-07-15 11:38:54.754346] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.494 [2024-07-15 11:38:54.754355] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:20.494 [2024-07-15 11:38:54.754476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.494 [2024-07-15 11:38:54.754589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:20.494 [2024-07-15 11:38:54.754700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:20.494 [2024-07-15 11:38:54.754702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.061 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:21.061 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:21.061 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:21.061 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:21.061 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:21.346 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.346 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:21.346 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.346 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:21.346 [2024-07-15 11:38:55.551984] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.346 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.346 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:21.346 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:21.346 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:21.346 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:21.346 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:21.346 11:38:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:21.346 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:21.346 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:21.346 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:21.346 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:21.347 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:21.347 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:21.347 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:21.347 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:21.347 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:21.347 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:21.347 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:21.347 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:21.347 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:21.347 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:21.347 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:21.347 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:21.347 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:21.347 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:21.347 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:21.347 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:21.347 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.347 11:38:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:21.347 Malloc1 00:23:21.347 [2024-07-15 11:38:55.658229] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.347 Malloc2 00:23:21.347 Malloc3 00:23:21.347 Malloc4 00:23:21.604 Malloc5 00:23:21.604 Malloc6 00:23:21.604 Malloc7 00:23:21.604 Malloc8 00:23:21.604 Malloc9 00:23:21.604 Malloc10 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2864842 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2864842 
/var/tmp/bdevperf.sock 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2864842 ']' 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:21.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:21.862 { 00:23:21.862 "params": { 00:23:21.862 "name": "Nvme$subsystem", 00:23:21.862 "trtype": "$TEST_TRANSPORT", 00:23:21.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.862 "adrfam": "ipv4", 00:23:21.862 "trsvcid": "$NVMF_PORT", 00:23:21.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.862 "hdgst": ${hdgst:-false}, 00:23:21.862 "ddgst": ${ddgst:-false} 00:23:21.862 }, 00:23:21.862 "method": "bdev_nvme_attach_controller" 00:23:21.862 } 00:23:21.862 EOF 00:23:21.862 )") 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:21.862 { 00:23:21.862 "params": { 00:23:21.862 "name": "Nvme$subsystem", 00:23:21.862 "trtype": "$TEST_TRANSPORT", 00:23:21.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.862 "adrfam": "ipv4", 00:23:21.862 "trsvcid": "$NVMF_PORT", 00:23:21.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.862 "hdgst": ${hdgst:-false}, 00:23:21.862 "ddgst": ${ddgst:-false} 00:23:21.862 }, 00:23:21.862 "method": "bdev_nvme_attach_controller" 00:23:21.862 } 00:23:21.862 EOF 00:23:21.862 )") 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:21.862 { 00:23:21.862 "params": { 00:23:21.862 
"name": "Nvme$subsystem", 00:23:21.862 "trtype": "$TEST_TRANSPORT", 00:23:21.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.862 "adrfam": "ipv4", 00:23:21.862 "trsvcid": "$NVMF_PORT", 00:23:21.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.862 "hdgst": ${hdgst:-false}, 00:23:21.862 "ddgst": ${ddgst:-false} 00:23:21.862 }, 00:23:21.862 "method": "bdev_nvme_attach_controller" 00:23:21.862 } 00:23:21.862 EOF 00:23:21.862 )") 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:21.862 { 00:23:21.862 "params": { 00:23:21.862 "name": "Nvme$subsystem", 00:23:21.862 "trtype": "$TEST_TRANSPORT", 00:23:21.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.862 "adrfam": "ipv4", 00:23:21.862 "trsvcid": "$NVMF_PORT", 00:23:21.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.862 "hdgst": ${hdgst:-false}, 00:23:21.862 "ddgst": ${ddgst:-false} 00:23:21.862 }, 00:23:21.862 "method": "bdev_nvme_attach_controller" 00:23:21.862 } 00:23:21.862 EOF 00:23:21.862 )") 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:21.862 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:21.862 { 00:23:21.862 "params": { 00:23:21.862 "name": "Nvme$subsystem", 00:23:21.862 "trtype": "$TEST_TRANSPORT", 00:23:21.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.862 "adrfam": "ipv4", 00:23:21.862 "trsvcid": "$NVMF_PORT", 00:23:21.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.863 "hdgst": ${hdgst:-false}, 00:23:21.863 "ddgst": ${ddgst:-false} 00:23:21.863 }, 00:23:21.863 "method": "bdev_nvme_attach_controller" 00:23:21.863 } 00:23:21.863 EOF 00:23:21.863 )") 00:23:21.863 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:21.863 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:21.863 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:21.863 { 00:23:21.863 "params": { 00:23:21.863 "name": "Nvme$subsystem", 00:23:21.863 "trtype": "$TEST_TRANSPORT", 00:23:21.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.863 "adrfam": "ipv4", 00:23:21.863 "trsvcid": "$NVMF_PORT", 00:23:21.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.863 "hdgst": ${hdgst:-false}, 00:23:21.863 "ddgst": ${ddgst:-false} 00:23:21.863 }, 00:23:21.863 "method": "bdev_nvme_attach_controller" 00:23:21.863 } 00:23:21.863 EOF 00:23:21.863 )") 00:23:21.863 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:21.863 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:21.863 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:21.863 { 00:23:21.863 "params": { 00:23:21.863 "name": "Nvme$subsystem", 
00:23:21.863 "trtype": "$TEST_TRANSPORT", 00:23:21.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.863 "adrfam": "ipv4", 00:23:21.863 "trsvcid": "$NVMF_PORT", 00:23:21.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.863 "hdgst": ${hdgst:-false}, 00:23:21.863 "ddgst": ${ddgst:-false} 00:23:21.863 }, 00:23:21.863 "method": "bdev_nvme_attach_controller" 00:23:21.863 } 00:23:21.863 EOF 00:23:21.863 )") 00:23:21.863 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:21.863 [2024-07-15 11:38:56.175548] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:23:21.863 [2024-07-15 11:38:56.175621] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:21.863 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:21.863 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:21.863 { 00:23:21.863 "params": { 00:23:21.863 "name": "Nvme$subsystem", 00:23:21.863 "trtype": "$TEST_TRANSPORT", 00:23:21.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.863 "adrfam": "ipv4", 00:23:21.863 "trsvcid": "$NVMF_PORT", 00:23:21.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.863 "hdgst": ${hdgst:-false}, 00:23:21.863 "ddgst": ${ddgst:-false} 00:23:21.863 }, 00:23:21.863 "method": "bdev_nvme_attach_controller" 00:23:21.863 } 00:23:21.863 EOF 00:23:21.863 )") 00:23:21.863 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:21.863 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:21.863 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:21.863 { 00:23:21.863 "params": { 00:23:21.863 "name": "Nvme$subsystem", 00:23:21.863 "trtype": "$TEST_TRANSPORT", 00:23:21.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.863 "adrfam": "ipv4", 00:23:21.863 "trsvcid": "$NVMF_PORT", 00:23:21.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.863 "hdgst": ${hdgst:-false}, 00:23:21.863 "ddgst": ${ddgst:-false} 00:23:21.863 }, 00:23:21.863 "method": "bdev_nvme_attach_controller" 00:23:21.863 } 00:23:21.863 EOF 00:23:21.863 )") 00:23:21.863 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:21.863 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:21.863 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:21.863 { 00:23:21.863 "params": { 00:23:21.863 "name": "Nvme$subsystem", 00:23:21.863 "trtype": "$TEST_TRANSPORT", 00:23:21.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.863 "adrfam": "ipv4", 00:23:21.863 "trsvcid": "$NVMF_PORT", 00:23:21.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.863 "hdgst": ${hdgst:-false}, 00:23:21.863 "ddgst": ${ddgst:-false} 00:23:21.863 }, 00:23:21.863 "method": "bdev_nvme_attach_controller" 00:23:21.863 } 00:23:21.863 EOF 00:23:21.863 )") 00:23:21.863 11:38:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:21.863 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:21.863 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:21.863 11:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:21.863 "params": { 00:23:21.863 "name": "Nvme1", 00:23:21.863 "trtype": "tcp", 00:23:21.863 "traddr": "10.0.0.2", 00:23:21.863 "adrfam": "ipv4", 00:23:21.863 "trsvcid": "4420", 00:23:21.863 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.863 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:21.863 "hdgst": false, 00:23:21.863 "ddgst": false 00:23:21.863 }, 00:23:21.863 "method": "bdev_nvme_attach_controller" 00:23:21.863 },{ 00:23:21.863 "params": { 00:23:21.863 "name": "Nvme2", 00:23:21.863 "trtype": "tcp", 00:23:21.863 "traddr": "10.0.0.2", 00:23:21.863 "adrfam": "ipv4", 00:23:21.863 "trsvcid": "4420", 00:23:21.863 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:21.863 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:21.863 "hdgst": false, 00:23:21.863 "ddgst": false 00:23:21.863 }, 00:23:21.863 "method": "bdev_nvme_attach_controller" 00:23:21.863 },{ 00:23:21.863 "params": { 00:23:21.863 "name": "Nvme3", 00:23:21.863 "trtype": "tcp", 00:23:21.863 "traddr": "10.0.0.2", 00:23:21.863 "adrfam": "ipv4", 00:23:21.863 "trsvcid": "4420", 00:23:21.863 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:21.863 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:21.863 "hdgst": false, 00:23:21.863 "ddgst": false 00:23:21.863 }, 00:23:21.863 "method": "bdev_nvme_attach_controller" 00:23:21.863 },{ 00:23:21.863 "params": { 00:23:21.863 "name": "Nvme4", 00:23:21.863 "trtype": "tcp", 00:23:21.863 "traddr": "10.0.0.2", 00:23:21.863 "adrfam": "ipv4", 00:23:21.863 "trsvcid": "4420", 00:23:21.863 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:21.863 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:21.863 "hdgst": false, 00:23:21.863 "ddgst": false 00:23:21.863 }, 00:23:21.863 "method": "bdev_nvme_attach_controller" 00:23:21.863 },{ 00:23:21.864 "params": { 00:23:21.864 "name": "Nvme5", 00:23:21.864 "trtype": "tcp", 00:23:21.864 "traddr": "10.0.0.2", 00:23:21.864 "adrfam": "ipv4", 00:23:21.864 "trsvcid": "4420", 00:23:21.864 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:21.864 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:21.864 "hdgst": false, 00:23:21.864 "ddgst": false 00:23:21.864 }, 00:23:21.864 "method": "bdev_nvme_attach_controller" 00:23:21.864 },{ 00:23:21.864 "params": { 00:23:21.864 "name": "Nvme6", 00:23:21.864 "trtype": "tcp", 00:23:21.864 "traddr": "10.0.0.2", 00:23:21.864 "adrfam": "ipv4", 00:23:21.864 "trsvcid": "4420", 00:23:21.864 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:21.864 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:21.864 "hdgst": false, 00:23:21.864 "ddgst": false 00:23:21.864 }, 00:23:21.864 "method": "bdev_nvme_attach_controller" 00:23:21.864 },{ 00:23:21.864 "params": { 00:23:21.864 "name": "Nvme7", 00:23:21.864 "trtype": "tcp", 00:23:21.864 "traddr": "10.0.0.2", 00:23:21.864 "adrfam": "ipv4", 00:23:21.864 "trsvcid": "4420", 00:23:21.864 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:21.864 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:21.864 "hdgst": false, 00:23:21.864 "ddgst": false 00:23:21.864 }, 00:23:21.864 "method": "bdev_nvme_attach_controller" 00:23:21.864 },{ 00:23:21.864 "params": { 00:23:21.864 "name": "Nvme8", 00:23:21.864 "trtype": "tcp", 00:23:21.864 "traddr": "10.0.0.2", 00:23:21.864 "adrfam": "ipv4", 
00:23:21.864 "trsvcid": "4420", 00:23:21.864 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:21.864 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:21.864 "hdgst": false, 00:23:21.864 "ddgst": false 00:23:21.864 }, 00:23:21.864 "method": "bdev_nvme_attach_controller" 00:23:21.864 },{ 00:23:21.864 "params": { 00:23:21.864 "name": "Nvme9", 00:23:21.864 "trtype": "tcp", 00:23:21.864 "traddr": "10.0.0.2", 00:23:21.864 "adrfam": "ipv4", 00:23:21.864 "trsvcid": "4420", 00:23:21.864 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:21.864 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:21.864 "hdgst": false, 00:23:21.864 "ddgst": false 00:23:21.864 }, 00:23:21.864 "method": "bdev_nvme_attach_controller" 00:23:21.864 },{ 00:23:21.864 "params": { 00:23:21.864 "name": "Nvme10", 00:23:21.864 "trtype": "tcp", 00:23:21.864 "traddr": "10.0.0.2", 00:23:21.864 "adrfam": "ipv4", 00:23:21.864 "trsvcid": "4420", 00:23:21.864 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:21.864 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:21.864 "hdgst": false, 00:23:21.864 "ddgst": false 00:23:21.864 }, 00:23:21.864 "method": "bdev_nvme_attach_controller" 00:23:21.864 }' 00:23:21.864 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.864 [2024-07-15 11:38:56.261275] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.122 [2024-07-15 11:38:56.347534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.021 11:38:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.021 11:38:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:24.021 11:38:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:24.021 11:38:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.021 11:38:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:24.021 11:38:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.021 11:38:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2864842 00:23:24.021 11:38:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:24.021 11:38:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:24.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2864842 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:24.957 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2864525 00:23:24.957 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:24.957 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:24.957 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:24.957 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:24.957 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:24.957 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:24.957 { 00:23:24.957 "params": { 00:23:24.957 "name": "Nvme$subsystem", 00:23:24.957 "trtype": "$TEST_TRANSPORT", 00:23:24.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.957 "adrfam": "ipv4", 00:23:24.957 "trsvcid": "$NVMF_PORT", 00:23:24.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.957 "hdgst": ${hdgst:-false}, 00:23:24.957 "ddgst": ${ddgst:-false} 00:23:24.957 }, 00:23:24.957 "method": "bdev_nvme_attach_controller" 00:23:24.957 } 00:23:24.957 EOF 00:23:24.957 )") 00:23:24.957 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:24.957 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:24.957 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:24.957 { 00:23:24.957 "params": { 00:23:24.957 "name": "Nvme$subsystem", 00:23:24.957 "trtype": "$TEST_TRANSPORT", 00:23:24.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.958 "adrfam": "ipv4", 00:23:24.958 "trsvcid": "$NVMF_PORT", 00:23:24.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.958 "hdgst": ${hdgst:-false}, 00:23:24.958 "ddgst": ${ddgst:-false} 00:23:24.958 }, 00:23:24.958 "method": "bdev_nvme_attach_controller" 00:23:24.958 } 00:23:24.958 EOF 00:23:24.958 )") 00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:24.958 { 00:23:24.958 "params": { 00:23:24.958 "name": "Nvme$subsystem", 00:23:24.958 "trtype": "$TEST_TRANSPORT", 00:23:24.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.958 "adrfam": "ipv4", 00:23:24.958 "trsvcid": "$NVMF_PORT", 00:23:24.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.958 "hdgst": ${hdgst:-false}, 00:23:24.958 "ddgst": ${ddgst:-false} 00:23:24.958 }, 00:23:24.958 "method": "bdev_nvme_attach_controller" 00:23:24.958 } 00:23:24.958 EOF 00:23:24.958 )") 00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:24.958 { 00:23:24.958 "params": { 00:23:24.958 "name": "Nvme$subsystem", 00:23:24.958 "trtype": "$TEST_TRANSPORT", 00:23:24.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.958 "adrfam": "ipv4", 00:23:24.958 "trsvcid": "$NVMF_PORT", 00:23:24.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.958 "hdgst": ${hdgst:-false}, 00:23:24.958 "ddgst": ${ddgst:-false} 00:23:24.958 }, 00:23:24.958 "method": "bdev_nvme_attach_controller" 00:23:24.958 } 00:23:24.958 EOF 00:23:24.958 )") 00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:23:24.958 { 00:23:24.958 "params": { 00:23:24.958 "name": "Nvme$subsystem", 00:23:24.958 "trtype": "$TEST_TRANSPORT", 00:23:24.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.958 "adrfam": "ipv4", 00:23:24.958 "trsvcid": "$NVMF_PORT", 00:23:24.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.958 "hdgst": ${hdgst:-false}, 00:23:24.958 "ddgst": ${ddgst:-false} 00:23:24.958 }, 00:23:24.958 "method": "bdev_nvme_attach_controller" 00:23:24.958 } 00:23:24.958 EOF 00:23:24.958 )") 00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:24.958 { 00:23:24.958 "params": { 00:23:24.958 "name": "Nvme$subsystem", 00:23:24.958 "trtype": "$TEST_TRANSPORT", 00:23:24.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.958 "adrfam": "ipv4", 00:23:24.958 "trsvcid": "$NVMF_PORT", 00:23:24.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.958 "hdgst": ${hdgst:-false}, 00:23:24.958 "ddgst": ${ddgst:-false} 00:23:24.958 }, 00:23:24.958 "method": "bdev_nvme_attach_controller" 00:23:24.958 } 00:23:24.958 EOF 00:23:24.958 )") 00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:24.958 { 00:23:24.958 "params": { 00:23:24.958 "name": "Nvme$subsystem", 00:23:24.958 "trtype": "$TEST_TRANSPORT", 00:23:24.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.958 "adrfam": "ipv4", 00:23:24.958 "trsvcid": "$NVMF_PORT", 00:23:24.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.958 "hdgst": ${hdgst:-false}, 00:23:24.958 "ddgst": ${ddgst:-false} 00:23:24.958 }, 00:23:24.958 "method": "bdev_nvme_attach_controller" 00:23:24.958 } 00:23:24.958 EOF 00:23:24.958 )") 00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:24.958 { 00:23:24.958 "params": { 00:23:24.958 "name": "Nvme$subsystem", 00:23:24.958 "trtype": "$TEST_TRANSPORT", 00:23:24.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.958 "adrfam": "ipv4", 00:23:24.958 "trsvcid": "$NVMF_PORT", 00:23:24.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.958 "hdgst": ${hdgst:-false}, 00:23:24.958 "ddgst": ${ddgst:-false} 00:23:24.958 }, 00:23:24.958 "method": "bdev_nvme_attach_controller" 00:23:24.958 } 00:23:24.958 EOF 00:23:24.958 )") 00:23:24.958 [2024-07-15 11:38:59.210696] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:24.958 [2024-07-15 11:38:59.210755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2865393 ] 00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:24.958 { 00:23:24.958 "params": { 00:23:24.958 "name": "Nvme$subsystem", 00:23:24.958 "trtype": "$TEST_TRANSPORT", 00:23:24.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.958 "adrfam": "ipv4", 00:23:24.958 "trsvcid": "$NVMF_PORT", 00:23:24.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.958 "hdgst": ${hdgst:-false}, 00:23:24.958 "ddgst": ${ddgst:-false} 00:23:24.958 }, 00:23:24.958 "method": "bdev_nvme_attach_controller" 00:23:24.958 } 00:23:24.958 EOF 00:23:24.958 )") 00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:24.958 { 00:23:24.958 "params": { 00:23:24.958 "name": "Nvme$subsystem", 00:23:24.958 "trtype": "$TEST_TRANSPORT", 00:23:24.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.958 "adrfam": "ipv4", 00:23:24.958 "trsvcid": "$NVMF_PORT", 00:23:24.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.958 "hdgst": ${hdgst:-false}, 00:23:24.958 "ddgst": ${ddgst:-false} 00:23:24.958 }, 00:23:24.958 "method": "bdev_nvme_attach_controller" 00:23:24.958 } 00:23:24.958 EOF 00:23:24.958 )") 00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
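
Before the fully expanded configuration is printed below, the shape of what gen_nvmf_target_json assembles is easier to see in isolation: one bdev_nvme_attach_controller entry per subsystem (Nvme1..Nvme10), all pointing at the same NVMe/TCP listener at 10.0.0.2:4420. The sketch below is hand-written for illustration; gen_entry is a hypothetical stand-in for the per-subsystem heredoc in nvmf/common.sh, and the outer JSON document that wraps these entries is not shown in this part of the log.

# hypothetical helper mirroring the per-subsystem template used above; i runs 1..10
gen_entry() {
  local i=$1
  cat <<EOF
{ "params": { "name": "Nvme$i", "trtype": "tcp", "traddr": "10.0.0.2",
    "adrfam": "ipv4", "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$i",
    "hostnqn": "nqn.2016-06.io.spdk:host$i",
    "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller" }
EOF
}

# The harness joins the entries with commas (IFS=,) and hands them to the app on an
# anonymous fd via process substitution; bdevperf (path relative to the SPDK tree)
# then runs a 1-second verify workload, queue depth 64, 64 KiB I/Os, across all
# ten attached controllers -- flags exactly as invoked above:
./build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
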
00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:24.958 11:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:24.958 "params": { 00:23:24.958 "name": "Nvme1", 00:23:24.958 "trtype": "tcp", 00:23:24.958 "traddr": "10.0.0.2", 00:23:24.958 "adrfam": "ipv4", 00:23:24.958 "trsvcid": "4420", 00:23:24.958 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.958 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:24.958 "hdgst": false, 00:23:24.958 "ddgst": false 00:23:24.958 }, 00:23:24.958 "method": "bdev_nvme_attach_controller" 00:23:24.958 },{ 00:23:24.958 "params": { 00:23:24.958 "name": "Nvme2", 00:23:24.958 "trtype": "tcp", 00:23:24.958 "traddr": "10.0.0.2", 00:23:24.958 "adrfam": "ipv4", 00:23:24.958 "trsvcid": "4420", 00:23:24.958 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:24.958 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:24.958 "hdgst": false, 00:23:24.958 "ddgst": false 00:23:24.958 }, 00:23:24.958 "method": "bdev_nvme_attach_controller" 00:23:24.958 },{ 00:23:24.958 "params": { 00:23:24.958 "name": "Nvme3", 00:23:24.958 "trtype": "tcp", 00:23:24.958 "traddr": "10.0.0.2", 00:23:24.958 "adrfam": "ipv4", 00:23:24.958 "trsvcid": "4420", 00:23:24.958 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:24.958 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:24.958 "hdgst": false, 00:23:24.958 "ddgst": false 00:23:24.958 }, 00:23:24.958 "method": "bdev_nvme_attach_controller" 00:23:24.958 },{ 00:23:24.958 "params": { 00:23:24.958 "name": "Nvme4", 00:23:24.958 "trtype": "tcp", 00:23:24.958 "traddr": "10.0.0.2", 00:23:24.958 "adrfam": "ipv4", 00:23:24.958 "trsvcid": "4420", 00:23:24.958 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:24.958 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:24.958 "hdgst": false, 00:23:24.958 "ddgst": false 00:23:24.958 }, 00:23:24.958 "method": "bdev_nvme_attach_controller" 00:23:24.958 },{ 00:23:24.958 "params": { 00:23:24.958 "name": "Nvme5", 00:23:24.958 "trtype": "tcp", 00:23:24.958 "traddr": "10.0.0.2", 00:23:24.958 "adrfam": "ipv4", 00:23:24.958 "trsvcid": "4420", 00:23:24.958 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:24.958 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:24.959 "hdgst": false, 00:23:24.959 "ddgst": false 00:23:24.959 }, 00:23:24.959 "method": "bdev_nvme_attach_controller" 00:23:24.959 },{ 00:23:24.959 "params": { 00:23:24.959 "name": "Nvme6", 00:23:24.959 "trtype": "tcp", 00:23:24.959 "traddr": "10.0.0.2", 00:23:24.959 "adrfam": "ipv4", 00:23:24.959 "trsvcid": "4420", 00:23:24.959 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:24.959 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:24.959 "hdgst": false, 00:23:24.959 "ddgst": false 00:23:24.959 }, 00:23:24.959 "method": "bdev_nvme_attach_controller" 00:23:24.959 },{ 00:23:24.959 "params": { 00:23:24.959 "name": "Nvme7", 00:23:24.959 "trtype": "tcp", 00:23:24.959 "traddr": "10.0.0.2", 00:23:24.959 "adrfam": "ipv4", 00:23:24.959 "trsvcid": "4420", 00:23:24.959 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:24.959 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:24.959 "hdgst": false, 00:23:24.959 "ddgst": false 00:23:24.959 }, 00:23:24.959 "method": "bdev_nvme_attach_controller" 00:23:24.959 },{ 00:23:24.959 "params": { 00:23:24.959 "name": "Nvme8", 00:23:24.959 "trtype": "tcp", 00:23:24.959 "traddr": "10.0.0.2", 00:23:24.959 "adrfam": "ipv4", 00:23:24.959 "trsvcid": "4420", 00:23:24.959 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:24.959 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:24.959 "hdgst": false, 
00:23:24.959 "ddgst": false 00:23:24.959 }, 00:23:24.959 "method": "bdev_nvme_attach_controller" 00:23:24.959 },{ 00:23:24.959 "params": { 00:23:24.959 "name": "Nvme9", 00:23:24.959 "trtype": "tcp", 00:23:24.959 "traddr": "10.0.0.2", 00:23:24.959 "adrfam": "ipv4", 00:23:24.959 "trsvcid": "4420", 00:23:24.959 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:24.959 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:24.959 "hdgst": false, 00:23:24.959 "ddgst": false 00:23:24.959 }, 00:23:24.959 "method": "bdev_nvme_attach_controller" 00:23:24.959 },{ 00:23:24.959 "params": { 00:23:24.959 "name": "Nvme10", 00:23:24.959 "trtype": "tcp", 00:23:24.959 "traddr": "10.0.0.2", 00:23:24.959 "adrfam": "ipv4", 00:23:24.959 "trsvcid": "4420", 00:23:24.959 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:24.959 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:24.959 "hdgst": false, 00:23:24.959 "ddgst": false 00:23:24.959 }, 00:23:24.959 "method": "bdev_nvme_attach_controller" 00:23:24.959 }' 00:23:24.959 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.959 [2024-07-15 11:38:59.299821] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.959 [2024-07-15 11:38:59.386610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.338 Running I/O for 1 seconds... 00:23:27.715 00:23:27.715 Latency(us) 00:23:27.715 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.715 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.715 Verification LBA range: start 0x0 length 0x400 00:23:27.715 Nvme1n1 : 1.12 171.26 10.70 0.00 0.00 368250.26 30742.34 326011.81 00:23:27.715 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.715 Verification LBA range: start 0x0 length 0x400 00:23:27.715 Nvme2n1 : 1.13 169.99 10.62 0.00 0.00 363719.84 51237.24 295507.78 00:23:27.715 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.715 Verification LBA range: start 0x0 length 0x400 00:23:27.715 Nvme3n1 : 1.21 211.55 13.22 0.00 0.00 287072.81 31218.97 341263.83 00:23:27.715 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.715 Verification LBA range: start 0x0 length 0x400 00:23:27.715 Nvme4n1 : 1.12 228.67 14.29 0.00 0.00 258723.37 23235.49 285975.27 00:23:27.715 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.715 Verification LBA range: start 0x0 length 0x400 00:23:27.715 Nvme5n1 : 1.13 169.61 10.60 0.00 0.00 341448.77 54811.93 341263.83 00:23:27.715 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.715 Verification LBA range: start 0x0 length 0x400 00:23:27.715 Nvme6n1 : 1.17 164.23 10.26 0.00 0.00 343233.01 35746.91 343170.33 00:23:27.715 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.715 Verification LBA range: start 0x0 length 0x400 00:23:27.715 Nvme7n1 : 1.23 208.22 13.01 0.00 0.00 268330.94 9175.04 320292.31 00:23:27.715 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.715 Verification LBA range: start 0x0 length 0x400 00:23:27.715 Nvme8n1 : 1.17 218.42 13.65 0.00 0.00 248020.25 22758.87 301227.29 00:23:27.715 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.715 Verification LBA range: start 0x0 length 0x400 00:23:27.715 Nvme9n1 : 1.22 210.54 13.16 0.00 0.00 253243.11 32648.84 303133.79 00:23:27.715 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:27.715 Verification LBA range: start 0x0 length 0x400 00:23:27.715 Nvme10n1 : 1.24 206.39 12.90 0.00 0.00 253147.75 6047.19 341263.83 00:23:27.715 =================================================================================================================== 00:23:27.715 Total : 1958.87 122.43 0.00 0.00 292336.35 6047.19 343170.33 00:23:27.715 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:23:27.715 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:27.715 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:27.715 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:27.715 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:27.715 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:27.715 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:23:27.715 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:27.715 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:23:27.715 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:27.715 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:27.715 rmmod nvme_tcp 00:23:27.715 rmmod nvme_fabrics 00:23:27.715 rmmod nvme_keyring 00:23:27.715 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:27.715 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:27.715 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:27.715 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2864525 ']' 00:23:27.715 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2864525 00:23:27.715 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 2864525 ']' 00:23:27.715 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 2864525 00:23:27.715 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:23:27.715 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:27.715 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2864525 00:23:27.974 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:27.974 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:27.974 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2864525' 00:23:27.974 killing process with pid 2864525 00:23:27.974 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 2864525 00:23:27.974 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 2864525 
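
The remainder of this test case is teardown: nvmftestfini unloads the kernel NVMe/TCP host stack (the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring going away), kills the nvmf_tgt started for tc1, and nvmf_tcp_fini below removes the namespace plumbing. Roughly, in terms of the commands visible in this trace (the namespace removal itself is wrapped in the _remove_spdk_ns helper, whose body is not shown here):

modprobe -v -r nvme-tcp              # pulls the NVMe/TCP host modules back out
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # nvmfpid is 2864525 in this run
_remove_spdk_ns                      # SPDK helper; removing cvl_0_0_ns_spdk is the assumed effect
ip -4 addr flush cvl_0_1             # drop the initiator address before the next test case

The tc2 run that starts immediately afterwards repeats the same NIC detection and namespace setup from scratch.
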
00:23:28.542 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:28.542 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:28.542 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:28.542 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:28.542 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:28.542 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.542 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:28.542 11:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.447 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:30.447 00:23:30.447 real 0m16.169s 00:23:30.447 user 0m37.921s 00:23:30.447 sys 0m5.793s 00:23:30.447 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:30.447 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:30.447 ************************************ 00:23:30.447 END TEST nvmf_shutdown_tc1 00:23:30.447 ************************************ 00:23:30.447 11:39:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:30.447 11:39:04 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:30.447 11:39:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:30.447 11:39:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:30.447 11:39:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:30.707 ************************************ 00:23:30.707 START TEST nvmf_shutdown_tc2 00:23:30.707 ************************************ 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:30.707 
11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:30.707 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:30.708 11:39:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:30.708 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:30.708 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:23:30.708 Found net devices under 0000:af:00.0: cvl_0_0 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:30.708 Found net devices under 0000:af:00.1: cvl_0_1 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:30.708 11:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:30.708 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:23:30.708 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:30.708 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:30.708 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:30.968 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:30.968 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:30.968 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:30.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:30.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:23:30.968 00:23:30.968 --- 10.0.0.2 ping statistics --- 00:23:30.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.968 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:23:30.968 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:30.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:30.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:23:30.968 00:23:30.968 --- 10.0.0.1 ping statistics --- 00:23:30.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.968 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:23:30.968 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:30.968 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:30.968 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:30.968 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:30.968 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:30.968 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:30.968 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:30.968 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:30.968 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:30.968 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:30.968 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:30.968 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:30.969 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:30.969 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2866669 00:23:30.969 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2866669 00:23:30.969 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1E 00:23:30.969 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2866669 ']' 00:23:30.969 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.969 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:30.969 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.969 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:30.969 11:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:30.969 [2024-07-15 11:39:05.302225] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:23:30.969 [2024-07-15 11:39:05.302287] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.969 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.969 [2024-07-15 11:39:05.389462] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:31.228 [2024-07-15 11:39:05.494500] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:31.228 [2024-07-15 11:39:05.494547] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:31.228 [2024-07-15 11:39:05.494560] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:31.228 [2024-07-15 11:39:05.494571] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:31.228 [2024-07-15 11:39:05.494581] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
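The entries above show the tc2 target being brought up: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with -i 0 -e 0xFFFF -m 0x1E and the harness waits on /var/tmp/spdk.sock before issuing RPCs. A minimal stand-alone sketch of that step, using the paths from the trace; the polling loop is only an illustrative stand-in for the waitforlisten helper, and the scripts/rpc.py location is assumed from a standard SPDK checkout:

    # Start the target in the test namespace; -m 0x1E pins reactors to cores 1-4.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # Poll the RPC socket until the target answers (illustrative waitforlisten stand-in).
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited before listening" >&2; break; }
        sleep 0.5
    done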
00:23:31.228 [2024-07-15 11:39:05.494712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:31.228 [2024-07-15 11:39:05.494844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:31.228 [2024-07-15 11:39:05.494989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:31.228 [2024-07-15 11:39:05.494991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.795 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:31.795 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:31.795 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:31.795 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:31.795 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:32.054 [2024-07-15 11:39:06.290478] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:32.054 11:39:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.054 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:32.054 Malloc1 00:23:32.054 [2024-07-15 11:39:06.396864] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.054 Malloc2 00:23:32.054 Malloc3 00:23:32.054 Malloc4 00:23:32.313 Malloc5 00:23:32.313 Malloc6 00:23:32.313 Malloc7 00:23:32.313 Malloc8 00:23:32.313 Malloc9 00:23:32.573 Malloc10 00:23:32.573 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.573 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:32.573 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:32.573 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:32.573 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2866981 00:23:32.573 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2866981 /var/tmp/bdevperf.sock 00:23:32.573 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2866981 ']' 00:23:32.573 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.573 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:32.573 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:32.573 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:32.573 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
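Below, gen_nvmf_target_json assembles one bdev_nvme_attach_controller entry per subsystem and the result is handed to bdevperf through /dev/fd/63, i.e. bash process substitution. A reduced sketch of the same pattern with a single, hypothetical entry (the harness generates ten and wraps them with jq, as the trace shows):

    # Hypothetical one-subsystem version of the generated config.
    config='{
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                      "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false }
        } ]
      } ]
    }'
    # Same shape as the logged command: 64-deep queue, 64 KiB I/O, verify workload, 10 s run.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json <(printf '%s\n' "$config") -q 64 -o 65536 -w verify -t 10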
00:23:32.573 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:32.573 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:32.573 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:32.573 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:32.573 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.573 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.573 { 00:23:32.573 "params": { 00:23:32.573 "name": "Nvme$subsystem", 00:23:32.573 "trtype": "$TEST_TRANSPORT", 00:23:32.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.573 "adrfam": "ipv4", 00:23:32.573 "trsvcid": "$NVMF_PORT", 00:23:32.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.573 "hdgst": ${hdgst:-false}, 00:23:32.573 "ddgst": ${ddgst:-false} 00:23:32.573 }, 00:23:32.573 "method": "bdev_nvme_attach_controller" 00:23:32.573 } 00:23:32.573 EOF 00:23:32.573 )") 00:23:32.573 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:32.573 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.573 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.573 { 00:23:32.573 "params": { 00:23:32.573 "name": "Nvme$subsystem", 00:23:32.573 "trtype": "$TEST_TRANSPORT", 00:23:32.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.573 "adrfam": "ipv4", 00:23:32.573 "trsvcid": "$NVMF_PORT", 00:23:32.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.573 "hdgst": ${hdgst:-false}, 00:23:32.573 "ddgst": ${ddgst:-false} 00:23:32.573 }, 00:23:32.573 "method": "bdev_nvme_attach_controller" 00:23:32.573 } 00:23:32.574 EOF 00:23:32.574 )") 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.574 { 00:23:32.574 "params": { 00:23:32.574 "name": "Nvme$subsystem", 00:23:32.574 "trtype": "$TEST_TRANSPORT", 00:23:32.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.574 "adrfam": "ipv4", 00:23:32.574 "trsvcid": "$NVMF_PORT", 00:23:32.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.574 "hdgst": ${hdgst:-false}, 00:23:32.574 "ddgst": ${ddgst:-false} 00:23:32.574 }, 00:23:32.574 "method": "bdev_nvme_attach_controller" 00:23:32.574 } 00:23:32.574 EOF 00:23:32.574 )") 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.574 { 00:23:32.574 "params": { 00:23:32.574 "name": "Nvme$subsystem", 00:23:32.574 "trtype": "$TEST_TRANSPORT", 00:23:32.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.574 "adrfam": "ipv4", 00:23:32.574 "trsvcid": "$NVMF_PORT", 
00:23:32.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.574 "hdgst": ${hdgst:-false}, 00:23:32.574 "ddgst": ${ddgst:-false} 00:23:32.574 }, 00:23:32.574 "method": "bdev_nvme_attach_controller" 00:23:32.574 } 00:23:32.574 EOF 00:23:32.574 )") 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.574 { 00:23:32.574 "params": { 00:23:32.574 "name": "Nvme$subsystem", 00:23:32.574 "trtype": "$TEST_TRANSPORT", 00:23:32.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.574 "adrfam": "ipv4", 00:23:32.574 "trsvcid": "$NVMF_PORT", 00:23:32.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.574 "hdgst": ${hdgst:-false}, 00:23:32.574 "ddgst": ${ddgst:-false} 00:23:32.574 }, 00:23:32.574 "method": "bdev_nvme_attach_controller" 00:23:32.574 } 00:23:32.574 EOF 00:23:32.574 )") 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.574 { 00:23:32.574 "params": { 00:23:32.574 "name": "Nvme$subsystem", 00:23:32.574 "trtype": "$TEST_TRANSPORT", 00:23:32.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.574 "adrfam": "ipv4", 00:23:32.574 "trsvcid": "$NVMF_PORT", 00:23:32.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.574 "hdgst": ${hdgst:-false}, 00:23:32.574 "ddgst": ${ddgst:-false} 00:23:32.574 }, 00:23:32.574 "method": "bdev_nvme_attach_controller" 00:23:32.574 } 00:23:32.574 EOF 00:23:32.574 )") 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.574 { 00:23:32.574 "params": { 00:23:32.574 "name": "Nvme$subsystem", 00:23:32.574 "trtype": "$TEST_TRANSPORT", 00:23:32.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.574 "adrfam": "ipv4", 00:23:32.574 "trsvcid": "$NVMF_PORT", 00:23:32.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.574 "hdgst": ${hdgst:-false}, 00:23:32.574 "ddgst": ${ddgst:-false} 00:23:32.574 }, 00:23:32.574 "method": "bdev_nvme_attach_controller" 00:23:32.574 } 00:23:32.574 EOF 00:23:32.574 )") 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:32.574 [2024-07-15 11:39:06.898491] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:23:32.574 [2024-07-15 11:39:06.898559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2866981 ] 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.574 { 00:23:32.574 "params": { 00:23:32.574 "name": "Nvme$subsystem", 00:23:32.574 "trtype": "$TEST_TRANSPORT", 00:23:32.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.574 "adrfam": "ipv4", 00:23:32.574 "trsvcid": "$NVMF_PORT", 00:23:32.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.574 "hdgst": ${hdgst:-false}, 00:23:32.574 "ddgst": ${ddgst:-false} 00:23:32.574 }, 00:23:32.574 "method": "bdev_nvme_attach_controller" 00:23:32.574 } 00:23:32.574 EOF 00:23:32.574 )") 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.574 { 00:23:32.574 "params": { 00:23:32.574 "name": "Nvme$subsystem", 00:23:32.574 "trtype": "$TEST_TRANSPORT", 00:23:32.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.574 "adrfam": "ipv4", 00:23:32.574 "trsvcid": "$NVMF_PORT", 00:23:32.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.574 "hdgst": ${hdgst:-false}, 00:23:32.574 "ddgst": ${ddgst:-false} 00:23:32.574 }, 00:23:32.574 "method": "bdev_nvme_attach_controller" 00:23:32.574 } 00:23:32.574 EOF 00:23:32.574 )") 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.574 { 00:23:32.574 "params": { 00:23:32.574 "name": "Nvme$subsystem", 00:23:32.574 "trtype": "$TEST_TRANSPORT", 00:23:32.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.574 "adrfam": "ipv4", 00:23:32.574 "trsvcid": "$NVMF_PORT", 00:23:32.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.574 "hdgst": ${hdgst:-false}, 00:23:32.574 "ddgst": ${ddgst:-false} 00:23:32.574 }, 00:23:32.574 "method": "bdev_nvme_attach_controller" 00:23:32.574 } 00:23:32.574 EOF 00:23:32.574 )") 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:32.574 11:39:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:32.574 "params": { 00:23:32.574 "name": "Nvme1", 00:23:32.574 "trtype": "tcp", 00:23:32.574 "traddr": "10.0.0.2", 00:23:32.574 "adrfam": "ipv4", 00:23:32.574 "trsvcid": "4420", 00:23:32.574 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.575 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:32.575 "hdgst": false, 00:23:32.575 "ddgst": false 00:23:32.575 }, 00:23:32.575 "method": "bdev_nvme_attach_controller" 00:23:32.575 },{ 00:23:32.575 "params": { 00:23:32.575 "name": "Nvme2", 00:23:32.575 "trtype": "tcp", 00:23:32.575 "traddr": "10.0.0.2", 00:23:32.575 "adrfam": "ipv4", 00:23:32.575 "trsvcid": "4420", 00:23:32.575 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:32.575 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:32.575 "hdgst": false, 00:23:32.575 "ddgst": false 00:23:32.575 }, 00:23:32.575 "method": "bdev_nvme_attach_controller" 00:23:32.575 },{ 00:23:32.575 "params": { 00:23:32.575 "name": "Nvme3", 00:23:32.575 "trtype": "tcp", 00:23:32.575 "traddr": "10.0.0.2", 00:23:32.575 "adrfam": "ipv4", 00:23:32.575 "trsvcid": "4420", 00:23:32.575 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:32.575 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:32.575 "hdgst": false, 00:23:32.575 "ddgst": false 00:23:32.575 }, 00:23:32.575 "method": "bdev_nvme_attach_controller" 00:23:32.575 },{ 00:23:32.575 "params": { 00:23:32.575 "name": "Nvme4", 00:23:32.575 "trtype": "tcp", 00:23:32.575 "traddr": "10.0.0.2", 00:23:32.575 "adrfam": "ipv4", 00:23:32.575 "trsvcid": "4420", 00:23:32.575 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:32.575 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:32.575 "hdgst": false, 00:23:32.575 "ddgst": false 00:23:32.575 }, 00:23:32.575 "method": "bdev_nvme_attach_controller" 00:23:32.575 },{ 00:23:32.575 "params": { 00:23:32.575 "name": "Nvme5", 00:23:32.575 "trtype": "tcp", 00:23:32.575 "traddr": "10.0.0.2", 00:23:32.575 "adrfam": "ipv4", 00:23:32.575 "trsvcid": "4420", 00:23:32.575 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:32.575 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:32.575 "hdgst": false, 00:23:32.575 "ddgst": false 00:23:32.575 }, 00:23:32.575 "method": "bdev_nvme_attach_controller" 00:23:32.575 },{ 00:23:32.575 "params": { 00:23:32.575 "name": "Nvme6", 00:23:32.575 "trtype": "tcp", 00:23:32.575 "traddr": "10.0.0.2", 00:23:32.575 "adrfam": "ipv4", 00:23:32.575 "trsvcid": "4420", 00:23:32.575 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:32.575 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:32.575 "hdgst": false, 00:23:32.575 "ddgst": false 00:23:32.575 }, 00:23:32.575 "method": "bdev_nvme_attach_controller" 00:23:32.575 },{ 00:23:32.575 "params": { 00:23:32.575 "name": "Nvme7", 00:23:32.575 "trtype": "tcp", 00:23:32.575 "traddr": "10.0.0.2", 00:23:32.575 "adrfam": "ipv4", 00:23:32.575 "trsvcid": "4420", 00:23:32.575 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:32.575 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:32.575 "hdgst": false, 00:23:32.575 "ddgst": false 00:23:32.575 }, 00:23:32.575 "method": "bdev_nvme_attach_controller" 00:23:32.575 },{ 00:23:32.575 "params": { 00:23:32.575 "name": "Nvme8", 00:23:32.575 "trtype": "tcp", 00:23:32.575 "traddr": "10.0.0.2", 00:23:32.575 "adrfam": "ipv4", 00:23:32.575 "trsvcid": "4420", 00:23:32.575 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:32.575 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:32.575 "hdgst": false, 
00:23:32.575 "ddgst": false 00:23:32.575 }, 00:23:32.575 "method": "bdev_nvme_attach_controller" 00:23:32.575 },{ 00:23:32.575 "params": { 00:23:32.575 "name": "Nvme9", 00:23:32.575 "trtype": "tcp", 00:23:32.575 "traddr": "10.0.0.2", 00:23:32.575 "adrfam": "ipv4", 00:23:32.575 "trsvcid": "4420", 00:23:32.575 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:32.575 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:32.575 "hdgst": false, 00:23:32.575 "ddgst": false 00:23:32.575 }, 00:23:32.575 "method": "bdev_nvme_attach_controller" 00:23:32.575 },{ 00:23:32.575 "params": { 00:23:32.575 "name": "Nvme10", 00:23:32.575 "trtype": "tcp", 00:23:32.575 "traddr": "10.0.0.2", 00:23:32.575 "adrfam": "ipv4", 00:23:32.575 "trsvcid": "4420", 00:23:32.575 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:32.575 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:32.575 "hdgst": false, 00:23:32.575 "ddgst": false 00:23:32.575 }, 00:23:32.575 "method": "bdev_nvme_attach_controller" 00:23:32.575 }' 00:23:32.575 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.575 [2024-07-15 11:39:06.982132] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.834 [2024-07-15 11:39:07.067712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.279 Running I/O for 10 seconds... 00:23:34.539 11:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:34.539 11:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:34.539 11:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:34.539 11:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.539 11:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:34.539 11:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.539 11:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:34.539 11:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:34.539 11:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:34.539 11:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:34.539 11:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:34.539 11:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:34.539 11:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:34.539 11:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:34.539 11:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:34.539 11:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.539 11:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:34.539 11:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.797 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:34.797 11:39:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:34.797 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:35.056 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:35.056 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:35.056 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:35.056 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:35.056 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.056 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:35.056 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.056 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:35.056 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:35.056 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:35.316 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:35.316 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:35.316 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:35.316 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:35.316 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.316 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:35.316 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.316 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:35.316 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:35.316 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:35.316 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:35.316 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:35.316 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2866981 00:23:35.316 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2866981 ']' 00:23:35.316 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2866981 00:23:35.316 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:35.316 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:35.316 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2866981 00:23:35.316 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:35.316 11:39:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:35.316 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2866981' 00:23:35.316 killing process with pid 2866981 00:23:35.316 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2866981 00:23:35.316 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2866981
00:23:35.316 Received shutdown signal, test time was about 1.082430 seconds
00:23:35.316
00:23:35.316                                                                                        Latency(us)
00:23:35.316 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s      Average        min        max
00:23:35.316 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:35.316 Verification LBA range: start 0x0 length 0x400
00:23:35.316 Nvme1n1                     :       1.07     179.56      11.22       0.00       0.00    350832.95   31695.59   343170.33
00:23:35.316 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:35.316 Verification LBA range: start 0x0 length 0x400
00:23:35.316 Nvme2n1                     :       1.05     183.02      11.44       0.00       0.00    337342.22   35985.22   289788.28
00:23:35.316 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:35.316 Verification LBA range: start 0x0 length 0x400
00:23:35.316 Nvme3n1                     :       1.02     187.73      11.73       0.00       0.00    320348.78   53858.68   291694.78
00:23:35.316 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:35.316 Verification LBA range: start 0x0 length 0x400
00:23:35.316 Nvme4n1                     :       1.04     250.45      15.65       0.00       0.00    233616.92    3634.27   284068.77
00:23:35.316 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:35.316 Verification LBA range: start 0x0 length 0x400
00:23:35.316 Nvme5n1                     :       1.05     188.39      11.77       0.00       0.00    303071.56    6136.55   263097.25
00:23:35.316 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:35.316 Verification LBA range: start 0x0 length 0x400
00:23:35.316 Nvme6n1                     :       1.06     185.78      11.61       0.00       0.00    300717.47    4140.68   350796.33
00:23:35.316 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:35.316 Verification LBA range: start 0x0 length 0x400
00:23:35.316 Nvme7n1                     :       1.03     186.19      11.64       0.00       0.00    291769.87   31695.59   247845.24
00:23:35.316 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:35.316 Verification LBA range: start 0x0 length 0x400
00:23:35.316 Nvme8n1                     :       1.02     188.55      11.78       0.00       0.00    279823.52   26810.18   305040.29
00:23:35.316 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:35.316 Verification LBA range: start 0x0 length 0x400
00:23:35.316 Nvme9n1                     :       1.07     179.05      11.19       0.00       0.00    289487.44   12571.00   352702.84
00:23:35.316 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:35.316 Verification LBA range: start 0x0 length 0x400
00:23:35.316 Nvme10n1                    :       1.08     177.54      11.10       0.00       0.00    284803.10   12332.68   335544.32
00:23:35.316 ===================================================================================================================
00:23:35.316 Total                       :                1906.27     119.14       0.00       0.00    296931.85    3634.27   352702.84
00:23:35.575 11:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:23:36.954 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2866669 00:23:36.954 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:23:36.954 11:39:11
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:36.954 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:36.954 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:36.954 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:36.954 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:36.954 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:36.954 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:36.954 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:36.954 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:36.954 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:36.954 rmmod nvme_tcp 00:23:36.954 rmmod nvme_fabrics 00:23:36.954 rmmod nvme_keyring 00:23:36.954 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:36.954 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:36.954 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:36.954 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2866669 ']' 00:23:36.954 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2866669 00:23:36.954 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2866669 ']' 00:23:36.954 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2866669 00:23:36.954 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:36.954 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:36.954 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2866669 00:23:36.954 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:36.954 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:36.954 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2866669' 00:23:36.954 killing process with pid 2866669 00:23:36.954 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2866669 00:23:36.954 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2866669 00:23:37.213 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:37.213 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:37.213 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:37.213 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
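The tc2 run is now being torn down: the generated bdevperf.conf and rpcs.txt are removed, nvmftestfini unloads the nvme_tcp/nvme_fabrics/nvme_keyring modules, the target process (pid 2866669) is killed, and nvmf_tcp_fini continues just below by removing the namespace state and flushing cvl_0_1. A condensed sketch of that cleanup; the explicit ip netns delete is an assumption about what the _remove_spdk_ns helper amounts to, not its actual code:

    kill "$nvmfpid" && wait "$nvmfpid"            # stop the nvmf_tgt used by this test case
    sync
    modprobe -v -r nvme-tcp                       # emits the rmmod lines seen above
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk 2> /dev/null  # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                      # drop the initiator-side test address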
00:23:37.213 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:37.213 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.213 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:37.213 11:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.750 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:39.750 00:23:39.750 real 0m8.733s 00:23:39.750 user 0m27.364s 00:23:39.750 sys 0m1.536s 00:23:39.750 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:39.751 ************************************ 00:23:39.751 END TEST nvmf_shutdown_tc2 00:23:39.751 ************************************ 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:39.751 ************************************ 00:23:39.751 START TEST nvmf_shutdown_tc3 00:23:39.751 ************************************ 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:39.751 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:39.751 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:39.751 Found net devices under 0000:af:00.0: cvl_0_0 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.751 11:39:13 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:39.751 Found net devices under 0000:af:00.1: cvl_0_1 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:39.751 11:39:13 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:39.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:39.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:23:39.751 00:23:39.751 --- 10.0.0.2 ping statistics --- 00:23:39.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.751 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:23:39.751 11:39:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:39.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:39.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:23:39.751 00:23:39.751 --- 10.0.0.1 ping statistics --- 00:23:39.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.751 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:23:39.751 11:39:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.751 11:39:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:39.751 11:39:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:39.751 11:39:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.751 11:39:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:39.751 11:39:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:39.751 11:39:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.751 11:39:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:39.751 11:39:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:39.751 11:39:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:39.751 11:39:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:39.751 11:39:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:39.751 11:39:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:39.752 11:39:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2868795 00:23:39.752 11:39:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2868795 00:23:39.752 11:39:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:39.752 11:39:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2868795 ']' 00:23:39.752 11:39:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.752 11:39:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:39.752 11:39:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.752 11:39:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:39.752 11:39:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:39.752 [2024-07-15 11:39:14.103799] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:23:39.752 [2024-07-15 11:39:14.103856] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.752 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.752 [2024-07-15 11:39:14.192703] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:40.011 [2024-07-15 11:39:14.298966] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.011 [2024-07-15 11:39:14.299015] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.011 [2024-07-15 11:39:14.299028] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.011 [2024-07-15 11:39:14.299039] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.011 [2024-07-15 11:39:14.299049] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:40.011 [2024-07-15 11:39:14.299118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.011 [2024-07-15 11:39:14.299230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:40.011 [2024-07-15 11:39:14.299344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:40.011 [2024-07-15 11:39:14.299346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.946 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:40.946 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:40.946 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:40.946 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:40.946 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:40.946 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.946 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:40.946 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.946 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:40.946 [2024-07-15 11:39:15.112938] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.946 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.946 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:40.946 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:40.946 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:40.946 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:40.946 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:40.946 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.946 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:40.946 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.946 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:40.946 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.946 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:40.946 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.946 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:40.946 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.946 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:40.946 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.947 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:40.947 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.947 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:40.947 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.947 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:40.947 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.947 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:40.947 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.947 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:40.947 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:40.947 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.947 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:40.947 Malloc1 00:23:40.947 [2024-07-15 11:39:15.230912] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.947 Malloc2 00:23:40.947 Malloc3 00:23:40.947 Malloc4 00:23:41.205 Malloc5 00:23:41.205 Malloc6 00:23:41.205 Malloc7 00:23:41.205 Malloc8 00:23:41.205 Malloc9 00:23:41.205 Malloc10 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2869112 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2869112 /var/tmp/bdevperf.sock 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2869112 ']' 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:41.464 { 00:23:41.464 "params": { 00:23:41.464 "name": "Nvme$subsystem", 00:23:41.464 "trtype": "$TEST_TRANSPORT", 00:23:41.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.464 "adrfam": "ipv4", 00:23:41.464 "trsvcid": "$NVMF_PORT", 00:23:41.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.464 "hdgst": ${hdgst:-false}, 00:23:41.464 "ddgst": ${ddgst:-false} 00:23:41.464 }, 00:23:41.464 "method": "bdev_nvme_attach_controller" 00:23:41.464 } 00:23:41.464 EOF 00:23:41.464 )") 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:41.464 { 00:23:41.464 "params": { 00:23:41.464 "name": "Nvme$subsystem", 00:23:41.464 "trtype": "$TEST_TRANSPORT", 00:23:41.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.464 "adrfam": "ipv4", 00:23:41.464 "trsvcid": "$NVMF_PORT", 00:23:41.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:23:41.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.464 "hdgst": ${hdgst:-false}, 00:23:41.464 "ddgst": ${ddgst:-false} 00:23:41.464 }, 00:23:41.464 "method": "bdev_nvme_attach_controller" 00:23:41.464 } 00:23:41.464 EOF 00:23:41.464 )") 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:41.464 { 00:23:41.464 "params": { 00:23:41.464 "name": "Nvme$subsystem", 00:23:41.464 "trtype": "$TEST_TRANSPORT", 00:23:41.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.464 "adrfam": "ipv4", 00:23:41.464 "trsvcid": "$NVMF_PORT", 00:23:41.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.464 "hdgst": ${hdgst:-false}, 00:23:41.464 "ddgst": ${ddgst:-false} 00:23:41.464 }, 00:23:41.464 "method": "bdev_nvme_attach_controller" 00:23:41.464 } 00:23:41.464 EOF 00:23:41.464 )") 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:41.464 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:41.464 { 00:23:41.464 "params": { 00:23:41.464 "name": "Nvme$subsystem", 00:23:41.464 "trtype": "$TEST_TRANSPORT", 00:23:41.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.464 "adrfam": "ipv4", 00:23:41.464 "trsvcid": "$NVMF_PORT", 00:23:41.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.465 "hdgst": ${hdgst:-false}, 00:23:41.465 "ddgst": ${ddgst:-false} 00:23:41.465 }, 00:23:41.465 "method": "bdev_nvme_attach_controller" 00:23:41.465 } 00:23:41.465 EOF 00:23:41.465 )") 00:23:41.465 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:41.465 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:41.465 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:41.465 { 00:23:41.465 "params": { 00:23:41.465 "name": "Nvme$subsystem", 00:23:41.465 "trtype": "$TEST_TRANSPORT", 00:23:41.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.465 "adrfam": "ipv4", 00:23:41.465 "trsvcid": "$NVMF_PORT", 00:23:41.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.465 "hdgst": ${hdgst:-false}, 00:23:41.465 "ddgst": ${ddgst:-false} 00:23:41.465 }, 00:23:41.465 "method": "bdev_nvme_attach_controller" 00:23:41.465 } 00:23:41.465 EOF 00:23:41.465 )") 00:23:41.465 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:41.465 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:41.465 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:41.465 { 00:23:41.465 "params": { 00:23:41.465 "name": "Nvme$subsystem", 00:23:41.465 "trtype": "$TEST_TRANSPORT", 00:23:41.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.465 "adrfam": "ipv4", 00:23:41.465 "trsvcid": "$NVMF_PORT", 00:23:41.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.465 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:23:41.465 "hdgst": ${hdgst:-false}, 00:23:41.465 "ddgst": ${ddgst:-false} 00:23:41.465 }, 00:23:41.465 "method": "bdev_nvme_attach_controller" 00:23:41.465 } 00:23:41.465 EOF 00:23:41.465 )") 00:23:41.465 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:41.465 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:41.465 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:41.465 { 00:23:41.465 "params": { 00:23:41.465 "name": "Nvme$subsystem", 00:23:41.465 "trtype": "$TEST_TRANSPORT", 00:23:41.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.465 "adrfam": "ipv4", 00:23:41.465 "trsvcid": "$NVMF_PORT", 00:23:41.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.465 "hdgst": ${hdgst:-false}, 00:23:41.465 "ddgst": ${ddgst:-false} 00:23:41.465 }, 00:23:41.465 "method": "bdev_nvme_attach_controller" 00:23:41.465 } 00:23:41.465 EOF 00:23:41.465 )") 00:23:41.465 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:41.465 [2024-07-15 11:39:15.770022] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:23:41.465 [2024-07-15 11:39:15.770081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2869112 ] 00:23:41.465 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:41.465 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:41.465 { 00:23:41.465 "params": { 00:23:41.465 "name": "Nvme$subsystem", 00:23:41.465 "trtype": "$TEST_TRANSPORT", 00:23:41.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.465 "adrfam": "ipv4", 00:23:41.465 "trsvcid": "$NVMF_PORT", 00:23:41.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.465 "hdgst": ${hdgst:-false}, 00:23:41.465 "ddgst": ${ddgst:-false} 00:23:41.465 }, 00:23:41.465 "method": "bdev_nvme_attach_controller" 00:23:41.465 } 00:23:41.465 EOF 00:23:41.465 )") 00:23:41.465 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:41.465 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:41.465 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:41.465 { 00:23:41.465 "params": { 00:23:41.465 "name": "Nvme$subsystem", 00:23:41.465 "trtype": "$TEST_TRANSPORT", 00:23:41.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.465 "adrfam": "ipv4", 00:23:41.465 "trsvcid": "$NVMF_PORT", 00:23:41.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.465 "hdgst": ${hdgst:-false}, 00:23:41.465 "ddgst": ${ddgst:-false} 00:23:41.465 }, 00:23:41.465 "method": "bdev_nvme_attach_controller" 00:23:41.465 } 00:23:41.465 EOF 00:23:41.465 )") 00:23:41.465 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:41.465 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:41.465 11:39:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:41.465 { 00:23:41.465 "params": { 00:23:41.465 "name": "Nvme$subsystem", 00:23:41.465 "trtype": "$TEST_TRANSPORT", 00:23:41.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.465 "adrfam": "ipv4", 00:23:41.465 "trsvcid": "$NVMF_PORT", 00:23:41.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.465 "hdgst": ${hdgst:-false}, 00:23:41.465 "ddgst": ${ddgst:-false} 00:23:41.465 }, 00:23:41.465 "method": "bdev_nvme_attach_controller" 00:23:41.465 } 00:23:41.465 EOF 00:23:41.465 )") 00:23:41.465 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:41.465 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:23:41.465 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:41.465 11:39:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:41.465 "params": { 00:23:41.465 "name": "Nvme1", 00:23:41.465 "trtype": "tcp", 00:23:41.465 "traddr": "10.0.0.2", 00:23:41.465 "adrfam": "ipv4", 00:23:41.465 "trsvcid": "4420", 00:23:41.465 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.465 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:41.465 "hdgst": false, 00:23:41.465 "ddgst": false 00:23:41.465 }, 00:23:41.465 "method": "bdev_nvme_attach_controller" 00:23:41.465 },{ 00:23:41.465 "params": { 00:23:41.465 "name": "Nvme2", 00:23:41.465 "trtype": "tcp", 00:23:41.465 "traddr": "10.0.0.2", 00:23:41.465 "adrfam": "ipv4", 00:23:41.465 "trsvcid": "4420", 00:23:41.465 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:41.465 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:41.465 "hdgst": false, 00:23:41.465 "ddgst": false 00:23:41.465 }, 00:23:41.465 "method": "bdev_nvme_attach_controller" 00:23:41.465 },{ 00:23:41.465 "params": { 00:23:41.465 "name": "Nvme3", 00:23:41.465 "trtype": "tcp", 00:23:41.465 "traddr": "10.0.0.2", 00:23:41.465 "adrfam": "ipv4", 00:23:41.465 "trsvcid": "4420", 00:23:41.465 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:41.465 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:41.465 "hdgst": false, 00:23:41.465 "ddgst": false 00:23:41.465 }, 00:23:41.465 "method": "bdev_nvme_attach_controller" 00:23:41.465 },{ 00:23:41.465 "params": { 00:23:41.465 "name": "Nvme4", 00:23:41.465 "trtype": "tcp", 00:23:41.465 "traddr": "10.0.0.2", 00:23:41.465 "adrfam": "ipv4", 00:23:41.465 "trsvcid": "4420", 00:23:41.465 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:41.465 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:41.465 "hdgst": false, 00:23:41.465 "ddgst": false 00:23:41.465 }, 00:23:41.465 "method": "bdev_nvme_attach_controller" 00:23:41.465 },{ 00:23:41.465 "params": { 00:23:41.465 "name": "Nvme5", 00:23:41.465 "trtype": "tcp", 00:23:41.465 "traddr": "10.0.0.2", 00:23:41.465 "adrfam": "ipv4", 00:23:41.465 "trsvcid": "4420", 00:23:41.465 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:41.465 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:41.465 "hdgst": false, 00:23:41.465 "ddgst": false 00:23:41.465 }, 00:23:41.465 "method": "bdev_nvme_attach_controller" 00:23:41.465 },{ 00:23:41.465 "params": { 00:23:41.465 "name": "Nvme6", 00:23:41.465 "trtype": "tcp", 00:23:41.465 "traddr": "10.0.0.2", 00:23:41.465 "adrfam": "ipv4", 00:23:41.465 "trsvcid": "4420", 00:23:41.465 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:41.465 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:41.465 "hdgst": false, 00:23:41.465 "ddgst": false 
00:23:41.465 }, 00:23:41.465 "method": "bdev_nvme_attach_controller" 00:23:41.465 },{ 00:23:41.465 "params": { 00:23:41.465 "name": "Nvme7", 00:23:41.465 "trtype": "tcp", 00:23:41.465 "traddr": "10.0.0.2", 00:23:41.465 "adrfam": "ipv4", 00:23:41.465 "trsvcid": "4420", 00:23:41.465 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:41.465 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:41.465 "hdgst": false, 00:23:41.465 "ddgst": false 00:23:41.465 }, 00:23:41.465 "method": "bdev_nvme_attach_controller" 00:23:41.465 },{ 00:23:41.465 "params": { 00:23:41.465 "name": "Nvme8", 00:23:41.465 "trtype": "tcp", 00:23:41.465 "traddr": "10.0.0.2", 00:23:41.465 "adrfam": "ipv4", 00:23:41.465 "trsvcid": "4420", 00:23:41.465 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:41.465 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:41.465 "hdgst": false, 00:23:41.465 "ddgst": false 00:23:41.465 }, 00:23:41.465 "method": "bdev_nvme_attach_controller" 00:23:41.465 },{ 00:23:41.465 "params": { 00:23:41.465 "name": "Nvme9", 00:23:41.465 "trtype": "tcp", 00:23:41.465 "traddr": "10.0.0.2", 00:23:41.465 "adrfam": "ipv4", 00:23:41.465 "trsvcid": "4420", 00:23:41.466 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:41.466 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:41.466 "hdgst": false, 00:23:41.466 "ddgst": false 00:23:41.466 }, 00:23:41.466 "method": "bdev_nvme_attach_controller" 00:23:41.466 },{ 00:23:41.466 "params": { 00:23:41.466 "name": "Nvme10", 00:23:41.466 "trtype": "tcp", 00:23:41.466 "traddr": "10.0.0.2", 00:23:41.466 "adrfam": "ipv4", 00:23:41.466 "trsvcid": "4420", 00:23:41.466 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:41.466 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:41.466 "hdgst": false, 00:23:41.466 "ddgst": false 00:23:41.466 }, 00:23:41.466 "method": "bdev_nvme_attach_controller" 00:23:41.466 }' 00:23:41.466 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.466 [2024-07-15 11:39:15.852547] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.725 [2024-07-15 11:39:15.937384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.100 Running I/O for 10 seconds... 
00:23:43.360 11:39:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:43.360 11:39:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:43.360 11:39:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:43.360 11:39:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.360 11:39:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.360 11:39:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.360 11:39:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:43.360 11:39:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:43.360 11:39:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:43.360 11:39:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:43.360 11:39:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:43.360 11:39:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:43.360 11:39:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:43.360 11:39:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:43.360 11:39:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:43.360 11:39:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:43.360 11:39:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.360 11:39:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.619 11:39:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.619 11:39:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:43.619 11:39:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:43.619 11:39:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:43.877 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:43.877 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:43.877 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:43.877 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:43.877 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.877 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.877 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.877 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=67 00:23:43.877 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:43.878 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:44.150 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:44.150 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:44.150 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:44.150 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:44.150 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.150 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:44.150 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.150 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:44.150 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:44.150 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:44.150 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:44.150 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:44.150 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2868795 00:23:44.150 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 2868795 ']' 00:23:44.150 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 2868795 00:23:44.150 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:23:44.150 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:44.150 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2868795 00:23:44.150 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:44.150 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:44.150 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2868795' 00:23:44.150 killing process with pid 2868795 00:23:44.150 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 2868795 00:23:44.150 11:39:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 2868795 00:23:44.150 [2024-07-15 11:39:18.506813] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.506963] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.506989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507010] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507029] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507051] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507072] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507091] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507110] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507130] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507150] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507209] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507229] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507307] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507326] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507346] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507405] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507424] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507464] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507503] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507544] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507564] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507583] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507602] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507622] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507643] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507663] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507683] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507725] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507745] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507764] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507784] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507804] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507827] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507865] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507886] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the 
state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507907] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507947] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507966] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.507986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.508005] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.508026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.508046] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.508066] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.508085] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.508105] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.508125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.508144] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.508163] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.508183] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.508202] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be61b0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.511007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8bb0 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.513160] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.513198] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.513219] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.513239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.513268] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.513289] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.513319] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.513338] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.513357] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.513376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.513396] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.513415] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.513435] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.513456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.513477] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.513497] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.513518] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.513539] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.513558] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.513578] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.150 [2024-07-15 11:39:18.513598] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.513618] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.513638] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.513660] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.513681] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.513703] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 
11:39:18.513723] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.513744] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.513763] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.513784] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.513803] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.513822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.513841] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.513864] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.513885] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.513904] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.513923] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.513942] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.513961] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.513982] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.514002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.514021] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.514040] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.514059] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.514079] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.514100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.514120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.514139] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same 
with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.514158] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.514179] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.514199] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.514218] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.514239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.514272] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.514295] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.514316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.514336] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.514357] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.514376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.514397] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.514428] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.514449] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.514469] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6650 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.517654] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.517718] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.517740] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.517761] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.517780] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.517799] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.517817] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.517835] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.517855] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.517873] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.517891] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.517910] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.517929] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.517948] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.517966] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.517984] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518021] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518039] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518059] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518077] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518115] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518152] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518179] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518198] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518217] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the 
state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518236] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518264] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518285] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518303] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518323] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518341] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518360] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518379] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518397] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518417] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518436] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518454] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518473] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518492] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518511] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518531] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518549] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518569] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518587] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518607] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.151 [2024-07-15 11:39:18.518626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.518644] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.518664] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.518683] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.518707] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.518727] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.518746] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.518766] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.518785] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.518802] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.518822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.518840] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.518859] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.518878] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.518897] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6af0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.521328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.152 [2024-07-15 11:39:18.521367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.152 [2024-07-15 11:39:18.521380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.152 [2024-07-15 11:39:18.521391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.152 [2024-07-15 11:39:18.521402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.152 [2024-07-15 11:39:18.521412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.152 [2024-07-15 11:39:18.521423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.152 [2024-07-15 11:39:18.521432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.152 [2024-07-15 11:39:18.521442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29cdd10 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.521480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.152 [2024-07-15 11:39:18.521493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.152 [2024-07-15 11:39:18.521503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.152 [2024-07-15 11:39:18.521513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.152 [2024-07-15 11:39:18.521523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.152 [2024-07-15 11:39:18.521533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.152 [2024-07-15 11:39:18.521550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.152 [2024-07-15 11:39:18.521560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.152 [2024-07-15 11:39:18.521569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2832630 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.521607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.152 [2024-07-15 11:39:18.521618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.152 [2024-07-15 11:39:18.521629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.152 [2024-07-15 11:39:18.521638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.152 [2024-07-15 11:39:18.521648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.152 [2024-07-15 11:39:18.521657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.152 [2024-07-15 11:39:18.521667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.152 [2024-07-15 11:39:18.521677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.152 [2024-07-15 11:39:18.521686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29b1ff0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.521736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.152 [2024-07-15 11:39:18.521748] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.152 [2024-07-15 11:39:18.521758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.152 [2024-07-15 11:39:18.521768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.152 [2024-07-15 11:39:18.521779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.152 [2024-07-15 11:39:18.521788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.152 [2024-07-15 11:39:18.521799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.152 [2024-07-15 11:39:18.521808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.152 [2024-07-15 11:39:18.521817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29d59c0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.521811] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.521848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.152 [2024-07-15 11:39:18.521850] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.521860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.152 [2024-07-15 11:39:18.521865] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.521874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.152 [2024-07-15 11:39:18.521878] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.521885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.152 [2024-07-15 11:39:18.521891] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.521898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.152 [2024-07-15 11:39:18.521904] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.521908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.152 [2024-07-15 11:39:18.521916] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 
11:39:18.521920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.152 [2024-07-15 11:39:18.521929] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.521931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.152 [2024-07-15 11:39:18.521944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2802120 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.521943] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.521959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.521970] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.521982] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.521994] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.522005] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.522016] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.522028] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.522039] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.522050] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.522061] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.522072] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.522084] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.522096] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.522113] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.522125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.522136] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.522148] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.522159] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.522170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.522182] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.152 [2024-07-15 11:39:18.522193] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522217] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522229] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522241] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522253] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522273] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522285] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522296] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522308] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522319] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522331] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522343] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522354] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522378] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522389] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522400] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522412] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522423] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522437] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522449] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522505] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522528] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522539] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522562] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522573] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.522584] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6fb0 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524049] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524096] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524117] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524136] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524155] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the 
state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524195] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524233] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524251] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524292] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524311] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524396] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524415] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524435] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524453] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524491] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524509] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524529] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524549] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524588] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524606] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524625] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524644] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524662] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524682] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524701] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524719] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524740] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524758] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524778] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524797] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524816] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524835] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524853] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524872] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524895] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524914] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524952] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524972] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.524990] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.525011] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.525030] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 
11:39:18.525049] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.153 [2024-07-15 11:39:18.525068] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.154 [2024-07-15 11:39:18.525087] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.154 [2024-07-15 11:39:18.525106] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.154 [2024-07-15 11:39:18.525125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.154 [2024-07-15 11:39:18.525144] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.154 [2024-07-15 11:39:18.525163] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.154 [2024-07-15 11:39:18.525181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.154 [2024-07-15 11:39:18.525200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.154 [2024-07-15 11:39:18.525219] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.154 [2024-07-15 11:39:18.525238] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.154 [2024-07-15 11:39:18.525265] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.154 [2024-07-15 11:39:18.525285] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.154 [2024-07-15 11:39:18.525305] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7450 is same with the state(5) to be set 00:23:44.154 [2024-07-15 11:39:18.525721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.525751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.525770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.525780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.525801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.525810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.525823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.525832] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.525844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.525854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.525866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.525875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.525887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.525896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.525908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.525918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.525930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.525939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.525951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.525961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.525972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.525982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.525993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.526003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.526014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.526024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.526035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.526045] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.526056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.526068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.526080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.526089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.526100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.526110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.526122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.526132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.526145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.526155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.526167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.526176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.526188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.526198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.526210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.526220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.526231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.526241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.526252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.526271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.526282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.526292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.526305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.526314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.526325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.526335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.526349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.526359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.526370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.526380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.526392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.526401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.154 [2024-07-15 11:39:18.526414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.154 [2024-07-15 11:39:18.526423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.526444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.526465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.526486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.526511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.526532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.526553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.526574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.526595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.526618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.526639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.526660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.526681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.526702] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.526722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.526743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.526764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.526785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.526806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.526827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.526850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.526871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.526897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.526918] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.526939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.526960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.526981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.526992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.527001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.527013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.527023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.527034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.527043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.527055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.527064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.527076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.527085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.527097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.527106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.527117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.527127] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.527562] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x294e650 was disconnected and freed. reset controller. 00:23:44.155 [2024-07-15 11:39:18.528375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.528401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.528418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.528428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.528440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.528450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.528463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.528472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.528484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.528494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.528505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.528515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.528526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.528535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.528547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.528556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.155 [2024-07-15 11:39:18.528568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.155 [2024-07-15 11:39:18.528578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.528589] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.528598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.528610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.528619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.528631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.528640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.528651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.528665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.528677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.528686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.528698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.528709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.528721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.528730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.528743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.528752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.528763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.528772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.528784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.528793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.528805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.528814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.528826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.528835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.528847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.528856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.528868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.528877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.528889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.528898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.528910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.528920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.528933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.528943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.528955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.528964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.528976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.528985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.528997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.529006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.529018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.529028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.529039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.529048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.529060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.529070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.529084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.529094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.529105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.529115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.529126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.529136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.529148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.529157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.529169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.529179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.529190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.529202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.529214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.529223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.529235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.529244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.529262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.529272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.529284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.529294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.529306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.529316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.529328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.529338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.529349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.529359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.529371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.529380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.529392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.529401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.529414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.529424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.529438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.529448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.529459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.529469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.156 [2024-07-15 11:39:18.529480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.156 [2024-07-15 11:39:18.529492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.157 [2024-07-15 11:39:18.529504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.157 [2024-07-15 11:39:18.529514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.157 [2024-07-15 11:39:18.529525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.157 [2024-07-15 11:39:18.529535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.157 [2024-07-15 11:39:18.529547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.157 [2024-07-15 11:39:18.529556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.157 [2024-07-15 11:39:18.529568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.157 [2024-07-15 11:39:18.529577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.157 [2024-07-15 11:39:18.529589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.157 [2024-07-15 11:39:18.529598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.157 [2024-07-15 11:39:18.529610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.157 [2024-07-15 11:39:18.529620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.157 [2024-07-15 11:39:18.529631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.157 [2024-07-15 11:39:18.529640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.157 [2024-07-15 11:39:18.529652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.157 [2024-07-15 11:39:18.529661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.157 [2024-07-15 11:39:18.529656] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set
00:23:44.157 [2024-07-15 11:39:18.529674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.157 [2024-07-15 11:39:18.529690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.157 [2024-07-15 11:39:18.529697] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.529702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.157 [2024-07-15 11:39:18.529719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.157 [2024-07-15 11:39:18.529719] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.529732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.157 [2024-07-15 11:39:18.529739] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.529745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.157 [2024-07-15 11:39:18.529761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.157 [2024-07-15 11:39:18.529761] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.529771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.157 [2024-07-15 11:39:18.529780] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.529785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.157 [2024-07-15 11:39:18.529799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.157 [2024-07-15 11:39:18.529802] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.529821] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.529832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:44.157 [2024-07-15 11:39:18.529839] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.529859] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.529877] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set
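The burst of "ABORTED - SQ DELETION" notices above is the host-side NVMe driver failing back every WRITE/READ still queued on I/O qpair 1 once its TCP connection dropped; "CQ transport error -6 (No such device or address)" is the completion-polling call reporting -ENXIO for that qpair, and the repeated tcp.c messages about tqpair=0x1be7d90 are the target side reacting to the same disconnect. The C sketch below is illustrative only and is not the code under test: assuming SPDK's public spdk_nvme_qpair_process_completions() and spdk_nvme_ctrlr_reset() behave as described, it shows how a host application polling an I/O qpair could treat that -ENXIO return as a dead connection and escalate to a controller reset, which is what the bdev_nvme layer does next in this log ("was disconnected and freed. reset controller" / "resetting controller"). The helper function names here are hypothetical.

/* Illustrative sketch only (hypothetical helpers); assumes SPDK's public
 * spdk_nvme_qpair_process_completions() and spdk_nvme_ctrlr_reset(). */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Drain completions from one I/O qpair; false means the transport is gone. */
static bool
poll_io_qpair(struct spdk_nvme_qpair *qpair)
{
	int rc = spdk_nvme_qpair_process_completions(qpair, 0 /* 0 = no limit */);

	if (rc >= 0) {
		return true;	/* rc completions were reaped */
	}
	if (rc == -ENXIO) {
		/* Corresponds to "CQ transport error -6 (No such device or
		 * address)" above: the connection is dead and any commands
		 * still queued have already completed as ABORTED - SQ DELETION. */
		fprintf(stderr, "qpair lost its transport connection\n");
		return false;
	}
	fprintf(stderr, "process_completions failed: %d\n", rc);
	return false;
}

/* On a dead qpair, escalate to a controller reset, mirroring what bdev_nvme
 * logs here as "reset controller" / "resetting controller". */
static void
recover_after_disconnect(struct spdk_nvme_ctrlr *ctrlr)
{
	if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
		fprintf(stderr, "controller reset failed\n");
	}
}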
00:23:44.157 [2024-07-15 11:39:18.529889] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x27fc2d0 was disconnected and freed. reset controller. 00:23:44.157 [2024-07-15 11:39:18.529896] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.529915] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.529933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.529951] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.529969] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.529988] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530043] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530061] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530080] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530106] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530144] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530162] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530199] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530218] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530237] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530270] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530289] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530308] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530326] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530345] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530363] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530382] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530419] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530437] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530493] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530512] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530567] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530585] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530604] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530622] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530644] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530663] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530682] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530701] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530719] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530738] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530756] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530774] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530792] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530811] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.157 [2024-07-15 11:39:18.530829] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.530847] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.530865] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be7d90 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.531817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:44.158 [2024-07-15 11:39:18.531851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29b1ff0 (9): Bad file descriptor 00:23:44.158 [2024-07-15 11:39:18.531887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.158 [2024-07-15 11:39:18.531900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.158 [2024-07-15 11:39:18.531910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.158 [2024-07-15 11:39:18.531920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.158 [2024-07-15 11:39:18.531931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.158 [2024-07-15 11:39:18.531941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.158 [2024-07-15 11:39:18.531951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.158 [2024-07-15 11:39:18.531960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.158 [2024-07-15 11:39:18.531970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29c3250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.531990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0x29cdd10 (9): Bad file descriptor 00:23:44.158 [2024-07-15 11:39:18.532009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2832630 (9): Bad file descriptor 00:23:44.158 [2024-07-15 11:39:18.532046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.158 [2024-07-15 11:39:18.532057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.158 [2024-07-15 11:39:18.532068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.158 [2024-07-15 11:39:18.532077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.158 [2024-07-15 11:39:18.532087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.158 [2024-07-15 11:39:18.532097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.158 [2024-07-15 11:39:18.532107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.158 [2024-07-15 11:39:18.532118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.158 [2024-07-15 11:39:18.532127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2303610 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.532167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.158 [2024-07-15 11:39:18.532178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.158 [2024-07-15 11:39:18.532189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.158 [2024-07-15 11:39:18.532198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.158 [2024-07-15 11:39:18.532208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.158 [2024-07-15 11:39:18.532217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.158 [2024-07-15 11:39:18.532228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.158 [2024-07-15 11:39:18.532237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.158 [2024-07-15 11:39:18.532248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2823de0 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.532274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29d59c0 (9): Bad file descriptor 00:23:44.158 [2024-07-15 11:39:18.532293] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2802120 (9): Bad file descriptor 00:23:44.158 [2024-07-15 11:39:18.532868] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.532896] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.532907] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.532918] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.532929] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.532941] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.532956] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.532967] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.532978] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.532989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.532999] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533011] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533022] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533033] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533045] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533056] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533067] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533078] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533089] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533111] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) 
to be set 00:23:44.158 [2024-07-15 11:39:18.533122] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533154] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533165] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533188] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533199] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533221] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533232] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533243] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533267] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533280] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533302] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533325] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533336] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.158 [2024-07-15 11:39:18.533347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.159 [2024-07-15 11:39:18.533358] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.159 [2024-07-15 11:39:18.533369] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.159 [2024-07-15 11:39:18.533380] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.159 [2024-07-15 11:39:18.533391] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.159 [2024-07-15 11:39:18.533402] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.159 [2024-07-15 11:39:18.533414] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.159 [2024-07-15 11:39:18.533425] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.159 [2024-07-15 11:39:18.533436] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.159 [2024-07-15 11:39:18.533449] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.159 [2024-07-15 11:39:18.533460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.159 [2024-07-15 11:39:18.533471] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.159 [2024-07-15 11:39:18.533483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.159 [2024-07-15 11:39:18.533494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.159 [2024-07-15 11:39:18.533505] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.159 [2024-07-15 11:39:18.533516] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.159 [2024-07-15 11:39:18.533527] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.159 [2024-07-15 11:39:18.533538] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.159 [2024-07-15 11:39:18.533549] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.159 [2024-07-15 11:39:18.533560] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.159 [2024-07-15 11:39:18.533572] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.159 [2024-07-15 11:39:18.533585] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.159 [2024-07-15 11:39:18.533596] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8250 is same with the state(5) to be set 00:23:44.159 [2024-07-15 11:39:18.534711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:44.159 [2024-07-15 11:39:18.534744] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2303610 (9): Bad file descriptor 00:23:44.159 [2024-07-15 11:39:18.534808] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:44.159 [2024-07-15 11:39:18.534861] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:44.159 [2024-07-15 11:39:18.534913] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:44.159 [2024-07-15 11:39:18.535219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.159 [2024-07-15 11:39:18.535237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.159 [2024-07-15 11:39:18.535271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.159 [2024-07-15 11:39:18.535282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.159 [2024-07-15 11:39:18.535294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.159 [2024-07-15 11:39:18.535304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.159 [2024-07-15 11:39:18.535316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.159 [2024-07-15 11:39:18.535325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.159 [2024-07-15 11:39:18.535337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.159 [2024-07-15 11:39:18.535346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.159 [2024-07-15 11:39:18.535358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.159 [2024-07-15 11:39:18.535368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.159 [2024-07-15 11:39:18.535379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.159 [2024-07-15 11:39:18.535388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.159 [2024-07-15 11:39:18.535400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.159 [2024-07-15 11:39:18.535410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.159 [2024-07-15 11:39:18.535421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.159 [2024-07-15 11:39:18.535431] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.159 [2024-07-15 11:39:18.535442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.159 [2024-07-15 11:39:18.535456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.159 [2024-07-15 11:39:18.535468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.159 [2024-07-15 11:39:18.535478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.159 [2024-07-15 11:39:18.535489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.159 [2024-07-15 11:39:18.535499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.159 [2024-07-15 11:39:18.535513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.159 [2024-07-15 11:39:18.535523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.159 [2024-07-15 11:39:18.535534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.159 [2024-07-15 11:39:18.535544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.159 [2024-07-15 11:39:18.535555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.159 [2024-07-15 11:39:18.535565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.159 [2024-07-15 11:39:18.535577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.159 [2024-07-15 11:39:18.535586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.159 [2024-07-15 11:39:18.535598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.159 [2024-07-15 11:39:18.535607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.159 [2024-07-15 11:39:18.535619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.159 [2024-07-15 11:39:18.535628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.159 [2024-07-15 11:39:18.535639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.159 [2024-07-15 11:39:18.535649] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.159 [2024-07-15 11:39:18.535661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.159 [2024-07-15 11:39:18.535670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.159 [2024-07-15 11:39:18.535682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.159 [2024-07-15 11:39:18.535691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.535702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.535712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.535726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.535736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.535747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.535756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.535768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.535777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.535789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.535799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.535810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.535820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.535831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.535841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.535856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.535866] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.535878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.535887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.535899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.535908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.535920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.535929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.535941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.535950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.535961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.535971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.535982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.535994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536078] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536304] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536515] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.160 [2024-07-15 11:39:18.536607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.160 [2024-07-15 11:39:18.536618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.536628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.536696] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x294d150 was disconnected and freed. reset controller. 
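[editor note] The "(00/08)" pair that spdk_nvme_print_completion keeps reporting above is the NVMe status code type / status code: SCT 0x00 (generic command status) with SC 0x08 (command aborted due to SQ deletion), which is the expected completion for in-flight I/O when a submission queue is deleted during a controller reset. The following standalone C sketch is not SPDK code; it only illustrates, under the standard NVMe completion-status bit layout, how such a 16-bit status field maps onto the p/m/dnr and (SCT/SC) values shown in these log lines.

/* Unpack an NVMe completion status word the way the log summarizes it. */
#include <stdint.h>
#include <stdio.h>

struct nvme_status_bits {
    uint8_t p;    /* phase tag,        bit 0     */
    uint8_t sc;   /* status code,      bits 8:1  */
    uint8_t sct;  /* status code type, bits 11:9 */
    uint8_t m;    /* more,             bit 14    */
    uint8_t dnr;  /* do not retry,     bit 15    */
};

static struct nvme_status_bits decode_status(uint16_t status)
{
    struct nvme_status_bits s = {
        .p   = status & 0x1,
        .sc  = (status >> 1) & 0xff,
        .sct = (status >> 9) & 0x7,
        .m   = (status >> 14) & 0x1,
        .dnr = (status >> 15) & 0x1,
    };
    return s;
}

int main(void)
{
    /* 0x0010 encodes SCT=0x00, SC=0x08: "ABORTED - SQ DELETION (00/08)". */
    struct nvme_status_bits s = decode_status(0x0010);
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}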
00:23:44.161 [2024-07-15 11:39:18.537244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.161 [2024-07-15 11:39:18.537271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29b1ff0 with addr=10.0.0.2, port=4420 00:23:44.161 [2024-07-15 11:39:18.537282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29b1ff0 is same with the state(5) to be set 00:23:44.161 [2024-07-15 11:39:18.539186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:23:44.161 [2024-07-15 11:39:18.539243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29cacb0 (9): Bad file descriptor 00:23:44.161 [2024-07-15 11:39:18.539459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.161 [2024-07-15 11:39:18.539474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2303610 with addr=10.0.0.2, port=4420 00:23:44.161 [2024-07-15 11:39:18.539484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2303610 is same with the state(5) to be set 00:23:44.161 [2024-07-15 11:39:18.539496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29b1ff0 (9): Bad file descriptor 00:23:44.161 [2024-07-15 11:39:18.539546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.539558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.539573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.539583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.539595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.539605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.539617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.539626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.539643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.539652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.539664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.539674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.539685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.539695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.539707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.539717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.539729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.539739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.539750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.539760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.539771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.539781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.539793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.539802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.539814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.539823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.539835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.539844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.539856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.539865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.539877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.539886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.539898] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.539914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.539926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.539935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.539947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.539956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.539968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.539977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.539989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.539999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.540010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.540020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.540031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.540040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.540052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.540062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.540074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.540083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.540095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.540105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.540116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.540126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.540137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.540147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.540159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.540169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.161 [2024-07-15 11:39:18.540183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.161 [2024-07-15 11:39:18.540192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.540204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.540213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.540225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.540234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.540246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.540263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.540276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.540286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.540297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.540307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.540319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.540328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.540340] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.540349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.540361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.540371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.540383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.540392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.540405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.540414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.540426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.540435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.540446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.540460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.540471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.540481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.540493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.540502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.540514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.540523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.540535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.540544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.540556] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.540566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.540578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.540588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.540599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.540609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.540620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.540630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.540642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.540651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.540663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.547872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.547891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.547902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.547916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.547926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.547942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.547953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.547966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.547977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.547990] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.548001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.548014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.548025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.548038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.548048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.548062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.548072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.548085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.548095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.548109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.548120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.548133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.548144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.162 [2024-07-15 11:39:18.548157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.162 [2024-07-15 11:39:18.548167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.163 [2024-07-15 11:39:18.548179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28cfae0 is same with the state(5) to be set 00:23:44.163 [2024-07-15 11:39:18.548239] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x28cfae0 was disconnected and freed. reset controller. 
00:23:44.163 [2024-07-15 11:39:18.548308] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:44.163 [2024-07-15 11:39:18.548495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2303610 (9): Bad file descriptor 00:23:44.163 [2024-07-15 11:39:18.548516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:44.163 [2024-07-15 11:39:18.548526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:44.163 [2024-07-15 11:39:18.548542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:44.163 [2024-07-15 11:39:18.548577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29c3250 (9): Bad file descriptor 00:23:44.163 [2024-07-15 11:39:18.548611] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:44.163 [2024-07-15 11:39:18.548647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.163 [2024-07-15 11:39:18.548659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.163 [2024-07-15 11:39:18.548671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.163 [2024-07-15 11:39:18.548681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.163 [2024-07-15 11:39:18.548692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.163 [2024-07-15 11:39:18.548704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.163 [2024-07-15 11:39:18.548715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.163 [2024-07-15 11:39:18.548725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.163 [2024-07-15 11:39:18.548735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29caac0 is same with the state(5) to be set 00:23:44.163 [2024-07-15 11:39:18.548757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2823de0 (9): Bad file descriptor 00:23:44.163 [2024-07-15 11:39:18.548795] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:44.163 [2024-07-15 11:39:18.548811] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:44.163 [2024-07-15 11:39:18.550414] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:44.163 [2024-07-15 11:39:18.550837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
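[editor note] The repeated "connect() failed, errno = 111" errors from posix_sock_create above are plain ECONNREFUSED: the TCP listener at 10.0.0.2:4420 is being torn down and re-created while bdev_nvme keeps trying to reconnect the qpairs it just reset. The sketch below is not the SPDK implementation; the address and port are taken from the log, and the retry count and delay are arbitrary assumptions. It only reproduces the failure mode: a blocking connect that retries while the refusal persists.

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Try to connect to ip:port, retrying only on ECONNREFUSED (errno 111 on Linux). */
static int connect_with_retry(const char *ip, uint16_t port, int attempts)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(port) };
    inet_pton(AF_INET, ip, &addr.sin_addr);

    for (int i = 0; i < attempts; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
            return fd;                      /* connected */
        int err = errno;                    /* save before close() can clobber it */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n", err, strerror(err));
        close(fd);
        if (err != ECONNREFUSED)
            break;                          /* only the refused case is worth retrying */
        usleep(100 * 1000);                 /* back off before the next attempt */
    }
    return -1;
}

int main(void)
{
    int fd = connect_with_retry("10.0.0.2", 4420, 5);
    if (fd >= 0)
        close(fd);
    return fd >= 0 ? 0 : 1;
}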
00:23:44.163 [2024-07-15 11:39:18.550868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:44.163 [2024-07-15 11:39:18.551085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.163 [2024-07-15 11:39:18.551106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29cacb0 with addr=10.0.0.2, port=4420 00:23:44.163 [2024-07-15 11:39:18.551118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29cacb0 is same with the state(5) to be set 00:23:44.163 [2024-07-15 11:39:18.551129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:44.163 [2024-07-15 11:39:18.551139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:44.163 [2024-07-15 11:39:18.551150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:44.163 [2024-07-15 11:39:18.551205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.163 [2024-07-15 11:39:18.551218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.163 [2024-07-15 11:39:18.551234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.163 [2024-07-15 11:39:18.551246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.163 [2024-07-15 11:39:18.551272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.163 [2024-07-15 11:39:18.551283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.163 [2024-07-15 11:39:18.551296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.163 [2024-07-15 11:39:18.551307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.163 [2024-07-15 11:39:18.551320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.163 [2024-07-15 11:39:18.551330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.163 [2024-07-15 11:39:18.551343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.163 [2024-07-15 11:39:18.551353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.163 [2024-07-15 11:39:18.551367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.163 [2024-07-15 11:39:18.551377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.163 [2024-07-15 11:39:18.551391] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.163 [2024-07-15 11:39:18.551401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.163 [2024-07-15 11:39:18.551414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.163 [2024-07-15 11:39:18.551425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.163 [2024-07-15 11:39:18.551438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.163 [2024-07-15 11:39:18.551448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.163 [2024-07-15 11:39:18.551461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.163 [2024-07-15 11:39:18.551472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.163 [2024-07-15 11:39:18.551485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.163 [2024-07-15 11:39:18.551495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.163 [2024-07-15 11:39:18.551508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.163 [2024-07-15 11:39:18.551518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.163 [2024-07-15 11:39:18.551531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.163 [2024-07-15 11:39:18.551541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.163 [2024-07-15 11:39:18.551555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.163 [2024-07-15 11:39:18.551567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.163 [2024-07-15 11:39:18.551581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.163 [2024-07-15 11:39:18.551591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.163 [2024-07-15 11:39:18.551604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.163 [2024-07-15 11:39:18.551615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.163 [2024-07-15 11:39:18.551628] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.163 [2024-07-15 11:39:18.551638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.163 [2024-07-15 11:39:18.551652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.163 [2024-07-15 11:39:18.551662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.163 [2024-07-15 11:39:18.551675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.163 [2024-07-15 11:39:18.551686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.163 [2024-07-15 11:39:18.551699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.163 [2024-07-15 11:39:18.551710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.551723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.551733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.551746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.551757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.551770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.551781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.551794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.551804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.551817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.551828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.551840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.551851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.551866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.551877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.551890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.551901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.551913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.551924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.551936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.551947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.551960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.551970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.551983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.551994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552100] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552347] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.164 [2024-07-15 11:39:18.552701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.164 [2024-07-15 11:39:18.552711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.552724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.552734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.552746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x294b130 is same with the state(5) to be set 00:23:44.165 [2024-07-15 11:39:18.554360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.554382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.554397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.554408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.554422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.554432] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.554445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.554456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.554470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.554480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.554493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.554503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.554517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.554527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.554541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.554551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.554564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.554575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.554588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.554599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.554612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.554622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.554635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.554646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.554659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.554673] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.554686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.554696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.554709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.554720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.554733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.554743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.554756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.554766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.554779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.554790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.554803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.554813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.554826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.554837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.554850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.554860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.554873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.554884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.554896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.554907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.554920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.554931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.554943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.554954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.554966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.554979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.554992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.555003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.555015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.555026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.555038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.555049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.555062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.555072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.555085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.555096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.555109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.555119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.555132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.555142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.555156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.555167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.555179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.555189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.555202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.555213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.555226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.555236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.555249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.555266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.555283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.555293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.555307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.165 [2024-07-15 11:39:18.555317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.165 [2024-07-15 11:39:18.555330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.555340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.555353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.555364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.555377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.555388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.555401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.555412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.555425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.555435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.555448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.555459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.555471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.555482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.555495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.555505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.555519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.555530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.555543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.555553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.555567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.555579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.555592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.555602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.555615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.555626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:44.166 [2024-07-15 11:39:18.555639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.555650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.555663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.555673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.555686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.555696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.555709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.555720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.555733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.555743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.555756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.555767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.555780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.555791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.555804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.555814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.555827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.555838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.555851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.555861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 
11:39:18.555879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.555889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.555901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28cd130 is same with the state(5) to be set 00:23:44.166 [2024-07-15 11:39:18.557384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.557402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.557417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.557427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.557439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.557449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.557461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.557472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.557484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.557494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.557505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.557515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.557527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.557536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.557549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.557558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.557571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.557580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.557592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.557601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.557614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.557623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.557639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.557648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.557661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.557670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.557682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.557691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.557703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.557712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.557724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.557733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.557745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.557755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.557766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.166 [2024-07-15 11:39:18.557776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.166 [2024-07-15 11:39:18.557788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.557797] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.557809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.557818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.557830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.557840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.557852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.557861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.557873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.557882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.557895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.557906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.557919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.557928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.557940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.557950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.557962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.557972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.557984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.557994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558230] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558452] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.167 [2024-07-15 11:39:18.558569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.167 [2024-07-15 11:39:18.558578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.558590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.558600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.558612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.558621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.558633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.558643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.558654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.558664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.558676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.558685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.558697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.558707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.558720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.558730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.558742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.558752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.558764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.558773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.558783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28ce5c0 is same with the state(5) to be set 00:23:44.168 [2024-07-15 11:39:18.560306] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:44.168 [2024-07-15 11:39:18.560345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
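The long runs of "READ ... ABORTED - SQ DELETION (00/08)" completions above are what the NVMe driver prints while a controller reset deletes the I/O submission queues: each outstanding READ is completed with status code type 0x0 (generic command status) and status code 0x08, "Command Aborted due to SQ Deletion". A minimal, hypothetical sketch of decoding that "(SCT/SC)" pair, covering only the values that actually appear in this trace (the helper name is invented for illustration and is not an SPDK function):

    #include <stdio.h>

    /* Hypothetical helper: decode the "(SCT/SC)" pair printed in the
     * completions above. Only the status seen in this log is handled
     * explicitly; anything else falls through to a generic string. */
    static const char *status_str(unsigned sct, unsigned sc)
    {
        if (sct == 0x0 && sc == 0x08)
            return "ABORTED - SQ DELETION"; /* generic status, aborted due to SQ deletion */
        if (sct == 0x0 && sc == 0x00)
            return "SUCCESS";
        return "other status";
    }

    int main(void)
    {
        printf("(00/08) -> %s\n", status_str(0x0, 0x08));
        return 0;
    }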
00:23:44.168 [2024-07-15 11:39:18.560357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:44.168 [2024-07-15 11:39:18.560372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:23:44.168 [2024-07-15 11:39:18.560384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:23:44.168 [2024-07-15 11:39:18.560544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:44.168 [2024-07-15 11:39:18.560563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2832630 with addr=10.0.0.2, port=4420
00:23:44.168 [2024-07-15 11:39:18.560574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2832630 is same with the state(5) to be set
00:23:44.168 [2024-07-15 11:39:18.560588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29cacb0 (9): Bad file descriptor
00:23:44.168 [2024-07-15 11:39:18.560624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29caac0 (9): Bad file descriptor
00:23:44.168 [2024-07-15 11:39:18.560669] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:44.168 [2024-07-15 11:39:18.561250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:44.168 [2024-07-15 11:39:18.561281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2802120 with addr=10.0.0.2, port=4420
00:23:44.168 [2024-07-15 11:39:18.561292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2802120 is same with the state(5) to be set
00:23:44.168 [2024-07-15 11:39:18.561471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:44.168 [2024-07-15 11:39:18.561485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29cdd10 with addr=10.0.0.2, port=4420
00:23:44.168 [2024-07-15 11:39:18.561495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29cdd10 is same with the state(5) to be set
00:23:44.168 [2024-07-15 11:39:18.561647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:44.168 [2024-07-15 11:39:18.561662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29d59c0 with addr=10.0.0.2, port=4420
00:23:44.168 [2024-07-15 11:39:18.561672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29d59c0 is same with the state(5) to be set
00:23:44.168 [2024-07-15 11:39:18.561685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2832630 (9): Bad file descriptor
00:23:44.168 [2024-07-15 11:39:18.561697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:23:44.168 [2024-07-15 11:39:18.561711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:23:44.168 [2024-07-15 11:39:18.561721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:23:44.168 [2024-07-15 11:39:18.561741] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
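In the connection errors just above, "errno = 111" from posix_sock_create is ECONNREFUSED on Linux, typically meaning nothing was listening on 10.0.0.2:4420 at that instant while the target side was being torn down and reconnected, and the "(9): Bad file descriptor" flush failures are EBADF on sockets that have already been closed. A quick sketch to confirm the errno mapping on a Linux host (the exact strerror wording is glibc's):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* errno values seen in the log above */
        printf("errno 111 -> %s (ECONNREFUSED is %d here)\n", strerror(111), ECONNREFUSED);
        printf("errno 9   -> %s (EBADF is %d here)\n", strerror(9), EBADF);
        return 0;
    }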
00:23:44.168 [2024-07-15 11:39:18.562733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.562750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.562767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.562778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.562790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.562800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.562812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.562822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.562834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.562844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.562856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.562866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.562877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.562887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.562899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.562909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.562921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.562930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.562942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.562952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 
11:39:18.562964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.562974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.562986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.563000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.563013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.563023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.563035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.563045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.563057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.563067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.563079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.563088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.563101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.563111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.563123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.563133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.563146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.563156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.563168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.563178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.168 [2024-07-15 11:39:18.563190] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.168 [2024-07-15 11:39:18.563200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563427] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563873] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.563983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.563993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.564006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.564016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.564028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.564038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.564050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.564060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.564072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.564082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.564094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.564104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.564116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.564129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.564141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.564151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.169 [2024-07-15 11:39:18.564162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.169 [2024-07-15 11:39:18.564172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.564182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28d0f70 is same with the state(5) to be set 00:23:44.170 [2024-07-15 11:39:18.565654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.565671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.565685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.565695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.565708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.565718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.565730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.565739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.565751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.565765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.565777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.565786] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.565798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.565807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.565820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.565831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.565843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.565853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.565865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.565879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.565891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.565901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.565913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.565922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.565935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.565944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.565957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.565966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.565979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.565989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.566012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.566034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.566056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.566078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.566099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.566122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.566144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.566169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.566191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.566212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.566234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.566261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.566282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.566303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.566324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.566346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.566367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.566388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.566409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.566433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.566453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.566475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.566498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.566519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.566540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.170 [2024-07-15 11:39:18.566561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.170 [2024-07-15 11:39:18.566572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.171 [2024-07-15 11:39:18.566581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.171 [2024-07-15 11:39:18.566595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.171 [2024-07-15 11:39:18.566604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.171 [2024-07-15 11:39:18.566616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.171 [2024-07-15 11:39:18.566625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.171 [2024-07-15 11:39:18.566637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.171 [2024-07-15 11:39:18.566646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.171 [2024-07-15 11:39:18.566657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.171 [2024-07-15 11:39:18.566666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:44.171 [2024-07-15 11:39:18.566678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.171 [2024-07-15 11:39:18.566687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.171 [2024-07-15 11:39:18.566705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.171 [2024-07-15 11:39:18.566714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.171 [2024-07-15 11:39:18.566726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.171 [2024-07-15 11:39:18.566735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.171 [2024-07-15 11:39:18.566747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.171 [2024-07-15 11:39:18.566758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.171 [2024-07-15 11:39:18.566770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.171 [2024-07-15 11:39:18.566780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.171 [2024-07-15 11:39:18.566792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.171 [2024-07-15 11:39:18.566803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.171 [2024-07-15 11:39:18.566815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.171 [2024-07-15 11:39:18.566824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.171 [2024-07-15 11:39:18.566836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.171 [2024-07-15 11:39:18.566845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.171 [2024-07-15 11:39:18.566857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.171 [2024-07-15 11:39:18.566866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.171 [2024-07-15 11:39:18.566877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.171 [2024-07-15 11:39:18.566887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.171 [2024-07-15 
11:39:18.566898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.171 [2024-07-15 11:39:18.566907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.171 [2024-07-15 11:39:18.566919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.171 [2024-07-15 11:39:18.566929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.171 [2024-07-15 11:39:18.566941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.171 [2024-07-15 11:39:18.566950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.171 [2024-07-15 11:39:18.566962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.171 [2024-07-15 11:39:18.566974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.171 [2024-07-15 11:39:18.566986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.171 [2024-07-15 11:39:18.566995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.171 [2024-07-15 11:39:18.567007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.171 [2024-07-15 11:39:18.567016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.171 [2024-07-15 11:39:18.567028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.171 [2024-07-15 11:39:18.567037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.171 [2024-07-15 11:39:18.567049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.171 [2024-07-15 11:39:18.567058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.171 [2024-07-15 11:39:18.567069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27fd7b0 is same with the state(5) to be set 00:23:44.171 [2024-07-15 11:39:18.568511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:44.171 [2024-07-15 11:39:18.568531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:44.171 [2024-07-15 11:39:18.568543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
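The long runs of "READ ... / ABORTED - SQ DELETION (00/08)" pairs above are spdk_nvme_print_completion dumping each outstanding I/O that was aborted when its qpair was torn down during the reset. "(00/08)" is status code type 0x00 (generic command status) with status code 0x08 (command aborted due to SQ deletion), and the trailing p/m/dnr values are the phase, more, and do-not-retry bits. A small illustrative sketch (based on the NVMe completion queue entry layout, not SPDK code) of pulling those fields out of completion dword 3:

    /* Sketch: decode the status fields of an NVMe CQE dword 3, matching the
     * "(SCT/SC) ... p:<P> m:<M> dnr:<DNR>" formatting seen in the log.
     * Per the NVMe base spec: bit 16 = phase tag, bits 24:17 = status code,
     * bits 27:25 = status code type, bit 30 = more, bit 31 = do not retry. */
    #include <stdint.h>
    #include <stdio.h>

    static void print_status(uint32_t cqe_dw3)
    {
        unsigned p   = (cqe_dw3 >> 16) & 0x1;
        unsigned sc  = (cqe_dw3 >> 17) & 0xff;
        unsigned sct = (cqe_dw3 >> 25) & 0x7;
        unsigned m   = (cqe_dw3 >> 30) & 0x1;
        unsigned dnr = (cqe_dw3 >> 31) & 0x1;

        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    }

    int main(void)
    {
        /* "ABORTED - SQ DELETION": generic status type 0x0, status code 0x08. */
        uint32_t dw3 = (0x0u << 25) | (0x08u << 17);
        print_status(dw3);   /* prints: (00/08) p:0 m:0 dnr:0 */
        return 0;
    }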
00:23:44.171 [2024-07-15 11:39:18.568553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:44.171 [2024-07-15 11:39:18.568565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:44.171 [2024-07-15 11:39:18.568609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2802120 (9): Bad file descriptor 00:23:44.171 [2024-07-15 11:39:18.568623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29cdd10 (9): Bad file descriptor 00:23:44.171 [2024-07-15 11:39:18.568635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29d59c0 (9): Bad file descriptor 00:23:44.171 [2024-07-15 11:39:18.568647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:44.171 [2024-07-15 11:39:18.568655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:44.171 [2024-07-15 11:39:18.568665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:44.171 [2024-07-15 11:39:18.568762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.171 [2024-07-15 11:39:18.568943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.171 [2024-07-15 11:39:18.568959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29b1ff0 with addr=10.0.0.2, port=4420 00:23:44.171 [2024-07-15 11:39:18.568969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29b1ff0 is same with the state(5) to be set 00:23:44.171 [2024-07-15 11:39:18.569156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.171 [2024-07-15 11:39:18.569171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2303610 with addr=10.0.0.2, port=4420 00:23:44.171 [2024-07-15 11:39:18.569182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2303610 is same with the state(5) to be set 00:23:44.171 [2024-07-15 11:39:18.569337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.171 [2024-07-15 11:39:18.569353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2823de0 with addr=10.0.0.2, port=4420 00:23:44.171 [2024-07-15 11:39:18.569362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2823de0 is same with the state(5) to be set 00:23:44.171 [2024-07-15 11:39:18.569522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.171 [2024-07-15 11:39:18.569535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29c3250 with addr=10.0.0.2, port=4420 00:23:44.171 [2024-07-15 11:39:18.569545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29c3250 is same with the state(5) to be set 00:23:44.171 [2024-07-15 11:39:18.569554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.171 [2024-07-15 11:39:18.569562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.171 [2024-07-15 11:39:18.569571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.171 [2024-07-15 11:39:18.569585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:44.171 [2024-07-15 11:39:18.569594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:44.171 [2024-07-15 11:39:18.569602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:44.172 [2024-07-15 11:39:18.569615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:44.172 [2024-07-15 11:39:18.569624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:44.172 [2024-07-15 11:39:18.569632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:44.172 [2024-07-15 11:39:18.570321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.172 [2024-07-15 11:39:18.570340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.172 [2024-07-15 11:39:18.570348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.172 [2024-07-15 11:39:18.570360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29b1ff0 (9): Bad file descriptor 00:23:44.172 [2024-07-15 11:39:18.570373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2303610 (9): Bad file descriptor 00:23:44.172 [2024-07-15 11:39:18.570385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2823de0 (9): Bad file descriptor 00:23:44.172 [2024-07-15 11:39:18.570397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29c3250 (9): Bad file descriptor 00:23:44.172 [2024-07-15 11:39:18.570438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:44.172 [2024-07-15 11:39:18.570448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:44.172 [2024-07-15 11:39:18.570457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:44.172 [2024-07-15 11:39:18.570470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:44.172 [2024-07-15 11:39:18.570478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:44.172 [2024-07-15 11:39:18.570487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:44.172 [2024-07-15 11:39:18.570500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:44.172 [2024-07-15 11:39:18.570508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:44.172 [2024-07-15 11:39:18.570521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:23:44.172 [2024-07-15 11:39:18.570533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:44.172 [2024-07-15 11:39:18.570542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:44.172 [2024-07-15 11:39:18.570550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:44.172 [2024-07-15 11:39:18.570621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.172 [2024-07-15 11:39:18.570631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.172 [2024-07-15 11:39:18.570639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.172 [2024-07-15 11:39:18.570646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.172 [2024-07-15 11:39:18.570717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.570730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.570746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.570756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.570768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.570777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.570789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.570798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.570810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.570820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.570831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.570841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.570853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.570863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.570874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 
nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.570884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.570895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.570905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.570917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.570930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.570942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.570951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.570963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.570974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.570985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.570996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.571007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.571016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.571028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.571037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.571049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.571059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.571071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.571080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.571092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.571102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.571114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.571123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.571135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.571144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.571156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.571165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.571177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.571187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.571200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.571210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.571222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.571232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.571244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.571260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.571273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.571283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.571295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.571304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.571316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.571325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.571337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.571347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.571359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.571368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.571381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.172 [2024-07-15 11:39:18.571390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.172 [2024-07-15 11:39:18.571402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.571412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.571432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.571454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.571478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.571500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.571521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:44.173 [2024-07-15 11:39:18.571543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.571563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.571585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.571607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.571628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.571649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.571670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.571691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.571713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.571734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 
11:39:18.571758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.571779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.571800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.571822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.571843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.571864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.571885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.571906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.571928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.571950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.571971] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.571983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.571992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.572004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.572015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.572027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.572037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.572049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.572059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.572070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.572080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.572091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.173 [2024-07-15 11:39:18.572101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.173 [2024-07-15 11:39:18.572111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x294bdd0 is same with the state(5) to be set 00:23:44.173 [2024-07-15 11:39:18.574555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:23:44.173 [2024-07-15 11:39:18.574582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:44.173 [2024-07-15 11:39:18.574594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:44.173 task offset: 16384 on job bdev=Nvme10n1 fails 00:23:44.173 00:23:44.173 Latency(us) 00:23:44.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.173 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:44.173 Job: Nvme1n1 ended in about 1.03 seconds with error 00:23:44.173 Verification LBA range: start 0x0 length 0x400 00:23:44.173 Nvme1n1 : 1.03 124.04 7.75 62.02 0.00 339720.38 54096.99 314572.80 00:23:44.173 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:44.173 Job: Nvme2n1 ended in about 1.04 seconds with error 00:23:44.173 Verification LBA range: start 0x0 length 0x400 00:23:44.173 Nvme2n1 : 
1.04 123.66 7.73 61.83 0.00 332857.72 45756.04 312666.30 00:23:44.173 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:44.173 Job: Nvme3n1 ended in about 1.04 seconds with error 00:23:44.173 Verification LBA range: start 0x0 length 0x400 00:23:44.173 Nvme3n1 : 1.04 123.32 7.71 61.66 0.00 325907.55 30504.03 392739.37 00:23:44.173 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:44.173 Job: Nvme4n1 ended in about 1.03 seconds with error 00:23:44.173 Verification LBA range: start 0x0 length 0x400 00:23:44.173 Nvme4n1 : 1.03 186.77 11.67 62.26 0.00 235969.40 22878.02 282162.27 00:23:44.173 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:44.173 Job: Nvme5n1 ended in about 1.04 seconds with error 00:23:44.173 Verification LBA range: start 0x0 length 0x400 00:23:44.173 Nvme5n1 : 1.04 122.69 7.67 61.34 0.00 311969.36 47900.86 278349.27 00:23:44.173 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:44.173 Job: Nvme6n1 ended in about 1.01 seconds with error 00:23:44.173 Verification LBA range: start 0x0 length 0x400 00:23:44.173 Nvme6n1 : 1.01 126.51 7.91 63.25 0.00 293555.43 4617.31 379393.86 00:23:44.173 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:44.174 Job: Nvme7n1 ended in about 1.05 seconds with error 00:23:44.174 Verification LBA range: start 0x0 length 0x400 00:23:44.174 Nvme7n1 : 1.05 122.35 7.65 61.17 0.00 297140.60 31695.59 291694.78 00:23:44.174 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:44.174 Job: Nvme8n1 ended in about 1.05 seconds with error 00:23:44.174 Verification LBA range: start 0x0 length 0x400 00:23:44.174 Nvme8n1 : 1.05 121.76 7.61 60.88 0.00 290995.04 26333.56 339357.32 00:23:44.174 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:44.174 Job: Nvme9n1 ended in about 1.02 seconds with error 00:23:44.174 Verification LBA range: start 0x0 length 0x400 00:23:44.174 Nvme9n1 : 1.02 125.86 7.87 62.93 0.00 271834.84 3485.32 308853.29 00:23:44.174 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:44.174 Job: Nvme10n1 ended in about 1.01 seconds with error 00:23:44.174 Verification LBA range: start 0x0 length 0x400 00:23:44.174 Nvme10n1 : 1.01 126.78 7.92 63.39 0.00 261724.78 6106.76 333637.82 00:23:44.174 =================================================================================================================== 00:23:44.174 Total : 1303.74 81.48 620.74 0.00 294225.63 3485.32 392739.37 00:23:44.432 [2024-07-15 11:39:18.606437] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:44.432 [2024-07-15 11:39:18.606482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:44.432 [2024-07-15 11:39:18.606904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.432 [2024-07-15 11:39:18.607241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.432 [2024-07-15 11:39:18.607276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29cacb0 with addr=10.0.0.2, port=4420 00:23:44.432 [2024-07-15 11:39:18.607289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29cacb0 is same with the state(5) to be set 00:23:44.432 [2024-07-15 11:39:18.607568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:23:44.432 [2024-07-15 11:39:18.607582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29d59c0 with addr=10.0.0.2, port=4420 00:23:44.432 [2024-07-15 11:39:18.607591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29d59c0 is same with the state(5) to be set 00:23:44.432 [2024-07-15 11:39:18.607750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.432 [2024-07-15 11:39:18.607764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29cdd10 with addr=10.0.0.2, port=4420 00:23:44.432 [2024-07-15 11:39:18.607774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29cdd10 is same with the state(5) to be set 00:23:44.432 [2024-07-15 11:39:18.608012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.432 [2024-07-15 11:39:18.608026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29caac0 with addr=10.0.0.2, port=4420 00:23:44.432 [2024-07-15 11:39:18.608036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29caac0 is same with the state(5) to be set 00:23:44.432 [2024-07-15 11:39:18.608095] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:44.432 [2024-07-15 11:39:18.608452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:44.432 [2024-07-15 11:39:18.608731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.432 [2024-07-15 11:39:18.608748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2802120 with addr=10.0.0.2, port=4420 00:23:44.432 [2024-07-15 11:39:18.608758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2802120 is same with the state(5) to be set 00:23:44.432 [2024-07-15 11:39:18.608773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29cacb0 (9): Bad file descriptor 00:23:44.432 [2024-07-15 11:39:18.608794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29d59c0 (9): Bad file descriptor 00:23:44.432 [2024-07-15 11:39:18.608806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29cdd10 (9): Bad file descriptor 00:23:44.432 [2024-07-15 11:39:18.608818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29caac0 (9): Bad file descriptor 00:23:44.432 [2024-07-15 11:39:18.608869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:44.432 [2024-07-15 11:39:18.608883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:44.432 [2024-07-15 11:39:18.608894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:44.432 [2024-07-15 11:39:18.608905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:44.432 [2024-07-15 11:39:18.609153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.432 [2024-07-15 11:39:18.609169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2832630 with addr=10.0.0.2, port=4420 00:23:44.432 [2024-07-15 11:39:18.609178] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2832630 is same with the state(5) to be set 00:23:44.432 [2024-07-15 11:39:18.609190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2802120 (9): Bad file descriptor 00:23:44.432 [2024-07-15 11:39:18.609200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:44.432 [2024-07-15 11:39:18.609209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:44.432 [2024-07-15 11:39:18.609219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:44.432 [2024-07-15 11:39:18.609232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:44.432 [2024-07-15 11:39:18.609240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:44.432 [2024-07-15 11:39:18.609249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:44.432 [2024-07-15 11:39:18.609267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:44.432 [2024-07-15 11:39:18.609276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:44.432 [2024-07-15 11:39:18.609285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:44.432 [2024-07-15 11:39:18.609297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:44.432 [2024-07-15 11:39:18.609305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:44.433 [2024-07-15 11:39:18.609314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:44.433 [2024-07-15 11:39:18.609362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.433 [2024-07-15 11:39:18.609373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.433 [2024-07-15 11:39:18.609381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.433 [2024-07-15 11:39:18.609388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.433 [2024-07-15 11:39:18.609571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.433 [2024-07-15 11:39:18.609585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29c3250 with addr=10.0.0.2, port=4420 00:23:44.433 [2024-07-15 11:39:18.609594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29c3250 is same with the state(5) to be set 00:23:44.433 [2024-07-15 11:39:18.609839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.433 [2024-07-15 11:39:18.609852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2823de0 with addr=10.0.0.2, port=4420 00:23:44.433 [2024-07-15 11:39:18.609862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2823de0 is same with the state(5) to be set 00:23:44.433 [2024-07-15 11:39:18.610050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.433 [2024-07-15 11:39:18.610063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2303610 with addr=10.0.0.2, port=4420 00:23:44.433 [2024-07-15 11:39:18.610072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2303610 is same with the state(5) to be set 00:23:44.433 [2024-07-15 11:39:18.610338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.433 [2024-07-15 11:39:18.610352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29b1ff0 with addr=10.0.0.2, port=4420 00:23:44.433 [2024-07-15 11:39:18.610361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29b1ff0 is same with the state(5) to be set 00:23:44.433 [2024-07-15 11:39:18.610373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2832630 (9): Bad file descriptor 00:23:44.433 [2024-07-15 11:39:18.610383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.433 [2024-07-15 11:39:18.610392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.433 [2024-07-15 11:39:18.610400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.433 [2024-07-15 11:39:18.610437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.433 [2024-07-15 11:39:18.610448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29c3250 (9): Bad file descriptor 00:23:44.433 [2024-07-15 11:39:18.610460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2823de0 (9): Bad file descriptor 00:23:44.433 [2024-07-15 11:39:18.610472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2303610 (9): Bad file descriptor 00:23:44.433 [2024-07-15 11:39:18.610484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29b1ff0 (9): Bad file descriptor 00:23:44.433 [2024-07-15 11:39:18.610494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:44.433 [2024-07-15 11:39:18.610502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:44.433 [2024-07-15 11:39:18.610510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:44.433 [2024-07-15 11:39:18.610541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.433 [2024-07-15 11:39:18.610551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:44.433 [2024-07-15 11:39:18.610560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:44.433 [2024-07-15 11:39:18.610569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:44.433 [2024-07-15 11:39:18.610581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:44.433 [2024-07-15 11:39:18.610589] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:44.433 [2024-07-15 11:39:18.610598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:44.433 [2024-07-15 11:39:18.610609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:44.433 [2024-07-15 11:39:18.610618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:44.433 [2024-07-15 11:39:18.610630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:44.433 [2024-07-15 11:39:18.610641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:44.433 [2024-07-15 11:39:18.610650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:44.433 [2024-07-15 11:39:18.610659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:44.433 [2024-07-15 11:39:18.610692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.433 [2024-07-15 11:39:18.610702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.433 [2024-07-15 11:39:18.610710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.433 [2024-07-15 11:39:18.610718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
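For context on the failure dump above: "ABORTED - SQ DELETION (00/08)" is the NVMe generic completion status (status code type 0x0, status code 0x08) printed for reads that were still outstanding when their submission queue was deleted, and errno = 111 on the connect() retries is Linux ECONNREFUSED, which is consistent with bdevperf trying to reconnect while the shutdown test is taking the target down. A minimal check of the errno mapping on the build host (illustrative only, not part of the test scripts):

  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
  # prints: ECONNREFUSED Connection refused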
00:23:44.691 11:39:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:44.691 11:39:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:45.626 11:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2869112 00:23:45.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2869112) - No such process 00:23:45.626 11:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:45.626 11:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:45.626 11:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:45.626 11:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:45.626 11:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:45.626 11:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:45.626 11:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:45.626 11:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:45.626 11:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:45.626 11:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:45.626 11:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:45.626 11:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:45.626 rmmod nvme_tcp 00:23:45.626 rmmod nvme_fabrics 00:23:45.891 rmmod nvme_keyring 00:23:45.891 11:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:45.891 11:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:45.891 11:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:45.891 11:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:45.891 11:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:45.891 11:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:45.891 11:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:45.891 11:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:45.891 11:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:45.891 11:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.891 11:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:45.891 11:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.794 11:39:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:47.794 00:23:47.794 real 0m8.466s 00:23:47.794 user 0m21.880s 00:23:47.794 sys 0m1.494s 00:23:47.794 
11:39:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:47.794 11:39:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:47.794 ************************************ 00:23:47.794 END TEST nvmf_shutdown_tc3 00:23:47.794 ************************************ 00:23:47.794 11:39:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:47.794 11:39:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:47.794 00:23:47.794 real 0m33.714s 00:23:47.794 user 1m27.292s 00:23:47.794 sys 0m9.068s 00:23:47.794 11:39:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:47.794 11:39:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:47.794 ************************************ 00:23:47.794 END TEST nvmf_shutdown 00:23:47.794 ************************************ 00:23:48.054 11:39:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:48.054 11:39:22 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:23:48.054 11:39:22 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:48.054 11:39:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:48.054 11:39:22 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:23:48.054 11:39:22 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:48.054 11:39:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:48.054 11:39:22 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:23:48.054 11:39:22 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:48.054 11:39:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:48.054 11:39:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:48.054 11:39:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:48.054 ************************************ 00:23:48.054 START TEST nvmf_multicontroller 00:23:48.054 ************************************ 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:48.054 * Looking for test storage... 
00:23:48.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:48.054 11:39:22 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:48.054 11:39:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:54.624 11:39:27 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:54.624 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:54.624 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:54.624 Found net devices under 0000:af:00.0: cvl_0_0 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:54.624 11:39:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:54.624 Found net devices under 0000:af:00.1: cvl_0_1 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:54.624 11:39:28 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:54.624 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:54.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:54.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:23:54.625 00:23:54.625 --- 10.0.0.2 ping statistics --- 00:23:54.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.625 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:23:54.625 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:54.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:54.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:23:54.625 00:23:54.625 --- 10.0.0.1 ping statistics --- 00:23:54.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.625 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:23:54.625 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:54.625 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:54.625 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:54.625 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:54.625 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:54.625 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:54.625 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:54.625 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:54.625 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:54.625 11:39:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:54.625 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:54.625 11:39:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:54.625 11:39:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:54.625 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2873420 00:23:54.625 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2873420 00:23:54.625 11:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:54.625 11:39:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2873420 ']' 00:23:54.625 11:39:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.625 11:39:28 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:23:54.625 11:39:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.625 11:39:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:54.625 11:39:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:54.625 [2024-07-15 11:39:28.368513] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:23:54.625 [2024-07-15 11:39:28.368567] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.625 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.625 [2024-07-15 11:39:28.456337] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:54.625 [2024-07-15 11:39:28.560632] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.625 [2024-07-15 11:39:28.560682] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.625 [2024-07-15 11:39:28.560695] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.625 [2024-07-15 11:39:28.560706] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.625 [2024-07-15 11:39:28.560715] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
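The nvmf_tgt command line above passes "-m 0xE", and the reactor start messages just below confirm the mask: 0xE selects cores 1, 2 and 3. A quick way to expand such a core mask in the shell (illustrative only, not part of the test scripts):

  mask=0xE; printf 'cores:'; for i in $(seq 0 31); do (( (mask >> i) & 1 )) && printf ' %d' "$i"; done; echo
  # cores: 1 2 3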
00:23:54.625 [2024-07-15 11:39:28.560783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:54.625 [2024-07-15 11:39:28.560918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:54.625 [2024-07-15 11:39:28.560920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.884 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:54.884 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:54.884 11:39:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:54.884 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:54.884 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.144 [2024-07-15 11:39:29.362468] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.144 Malloc0 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.144 [2024-07-15 11:39:29.438660] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.144 
11:39:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.144 [2024-07-15 11:39:29.446570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.144 Malloc1 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2873697 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 2873697 /var/tmp/bdevperf.sock 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2873697 ']' 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:55.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:55.144 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.403 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:55.403 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:55.403 11:39:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:55.403 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.403 11:39:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.663 NVMe0n1 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.663 1 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.663 request: 00:23:55.663 { 00:23:55.663 "name": "NVMe0", 00:23:55.663 "trtype": "tcp", 00:23:55.663 "traddr": "10.0.0.2", 00:23:55.663 "adrfam": "ipv4", 00:23:55.663 "trsvcid": "4420", 00:23:55.663 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.663 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:55.663 "hostaddr": "10.0.0.2", 00:23:55.663 "hostsvcid": "60000", 00:23:55.663 "prchk_reftag": false, 00:23:55.663 "prchk_guard": false, 00:23:55.663 "hdgst": false, 00:23:55.663 "ddgst": false, 00:23:55.663 "method": "bdev_nvme_attach_controller", 00:23:55.663 "req_id": 1 00:23:55.663 } 00:23:55.663 Got JSON-RPC error response 00:23:55.663 response: 00:23:55.663 { 00:23:55.663 "code": -114, 00:23:55.663 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:55.663 } 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.663 request: 00:23:55.663 { 00:23:55.663 "name": "NVMe0", 00:23:55.663 "trtype": "tcp", 00:23:55.663 "traddr": "10.0.0.2", 00:23:55.663 "adrfam": "ipv4", 00:23:55.663 "trsvcid": "4420", 00:23:55.663 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:55.663 "hostaddr": "10.0.0.2", 00:23:55.663 "hostsvcid": "60000", 00:23:55.663 "prchk_reftag": false, 00:23:55.663 "prchk_guard": false, 00:23:55.663 
"hdgst": false, 00:23:55.663 "ddgst": false, 00:23:55.663 "method": "bdev_nvme_attach_controller", 00:23:55.663 "req_id": 1 00:23:55.663 } 00:23:55.663 Got JSON-RPC error response 00:23:55.663 response: 00:23:55.663 { 00:23:55.663 "code": -114, 00:23:55.663 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:55.663 } 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.663 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.663 request: 00:23:55.663 { 00:23:55.663 "name": "NVMe0", 00:23:55.663 "trtype": "tcp", 00:23:55.663 "traddr": "10.0.0.2", 00:23:55.663 "adrfam": "ipv4", 00:23:55.663 "trsvcid": "4420", 00:23:55.663 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.664 "hostaddr": "10.0.0.2", 00:23:55.664 "hostsvcid": "60000", 00:23:55.664 "prchk_reftag": false, 00:23:55.664 "prchk_guard": false, 00:23:55.664 "hdgst": false, 00:23:55.664 "ddgst": false, 00:23:55.664 "multipath": "disable", 00:23:55.664 "method": "bdev_nvme_attach_controller", 00:23:55.664 "req_id": 1 00:23:55.664 } 00:23:55.664 Got JSON-RPC error response 00:23:55.664 response: 00:23:55.664 { 00:23:55.664 "code": -114, 00:23:55.664 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:55.664 } 00:23:55.664 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:55.664 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:55.664 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:55.664 11:39:30 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:55.664 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:55.664 11:39:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:55.664 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:55.664 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:55.664 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:55.664 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:55.664 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:55.664 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:55.664 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:55.664 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.664 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.664 request: 00:23:55.664 { 00:23:55.664 "name": "NVMe0", 00:23:55.664 "trtype": "tcp", 00:23:55.664 "traddr": "10.0.0.2", 00:23:55.664 "adrfam": "ipv4", 00:23:55.664 "trsvcid": "4420", 00:23:55.664 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.664 "hostaddr": "10.0.0.2", 00:23:55.664 "hostsvcid": "60000", 00:23:55.664 "prchk_reftag": false, 00:23:55.664 "prchk_guard": false, 00:23:55.664 "hdgst": false, 00:23:55.664 "ddgst": false, 00:23:55.664 "multipath": "failover", 00:23:55.664 "method": "bdev_nvme_attach_controller", 00:23:55.664 "req_id": 1 00:23:55.664 } 00:23:55.664 Got JSON-RPC error response 00:23:55.664 response: 00:23:55.664 { 00:23:55.664 "code": -114, 00:23:55.664 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:55.664 } 00:23:55.664 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:55.664 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:55.664 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:55.664 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:55.664 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:55.664 11:39:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:55.664 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.664 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.923 00:23:55.923 11:39:30 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.923 11:39:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:55.923 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.923 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.923 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.923 11:39:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:55.923 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.923 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:56.182 00:23:56.182 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.182 11:39:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:56.182 11:39:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:56.182 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.182 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:56.182 11:39:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.182 11:39:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:56.182 11:39:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:57.561 0 00:23:57.561 11:39:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:57.561 11:39:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.561 11:39:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.561 11:39:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.561 11:39:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2873697 00:23:57.561 11:39:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2873697 ']' 00:23:57.561 11:39:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2873697 00:23:57.561 11:39:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:57.561 11:39:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:57.561 11:39:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2873697 00:23:57.561 11:39:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:57.561 11:39:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:57.561 11:39:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2873697' 00:23:57.562 killing process with pid 2873697 00:23:57.562 11:39:31 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2873697 00:23:57.562 11:39:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2873697 00:23:57.562 11:39:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:57.562 11:39:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.562 11:39:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.562 11:39:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.562 11:39:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:57.562 11:39:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.562 11:39:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.562 11:39:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.562 11:39:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:57.562 11:39:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:57.562 11:39:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:57.562 11:39:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:57.562 11:39:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:57.562 11:39:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:57.562 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:57.562 [2024-07-15 11:39:29.554471] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:23:57.562 [2024-07-15 11:39:29.554535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2873697 ] 00:23:57.562 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.562 [2024-07-15 11:39:29.633524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.562 [2024-07-15 11:39:29.720574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.562 [2024-07-15 11:39:30.459286] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name b20db408-3827-41c3-b65e-5ee62d8e30ba already exists 00:23:57.562 [2024-07-15 11:39:30.459322] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:b20db408-3827-41c3-b65e-5ee62d8e30ba alias for bdev NVMe1n1 00:23:57.562 [2024-07-15 11:39:30.459334] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:57.562 Running I/O for 1 seconds... 
00:23:57.562 00:23:57.562 Latency(us) 00:23:57.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.562 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:57.562 NVMe0n1 : 1.01 7917.08 30.93 0.00 0.00 16128.86 2234.18 29074.15 00:23:57.562 =================================================================================================================== 00:23:57.562 Total : 7917.08 30.93 0.00 0.00 16128.86 2234.18 29074.15 00:23:57.562 Received shutdown signal, test time was about 1.000000 seconds 00:23:57.562 00:23:57.562 Latency(us) 00:23:57.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.562 =================================================================================================================== 00:23:57.562 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:57.562 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:57.562 11:39:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:57.562 11:39:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:57.562 11:39:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:57.562 11:39:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:57.562 11:39:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:57.562 11:39:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:57.562 11:39:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:57.562 11:39:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:57.562 11:39:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:57.562 rmmod nvme_tcp 00:23:57.562 rmmod nvme_fabrics 00:23:57.562 rmmod nvme_keyring 00:23:57.562 11:39:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:57.562 11:39:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:57.562 11:39:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:57.562 11:39:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2873420 ']' 00:23:57.562 11:39:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2873420 00:23:57.562 11:39:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2873420 ']' 00:23:57.562 11:39:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2873420 00:23:57.562 11:39:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:57.562 11:39:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:57.562 11:39:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2873420 00:23:57.821 11:39:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:57.821 11:39:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:57.821 11:39:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2873420' 00:23:57.821 killing process with pid 2873420 00:23:57.822 11:39:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2873420 00:23:57.822 11:39:32 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2873420 00:23:58.081 11:39:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:58.081 11:39:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:58.081 11:39:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:58.081 11:39:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:58.081 11:39:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:58.081 11:39:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.081 11:39:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:58.081 11:39:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.616 11:39:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:00.616 00:24:00.616 real 0m12.153s 00:24:00.616 user 0m15.710s 00:24:00.616 sys 0m5.283s 00:24:00.616 11:39:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:00.616 11:39:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:00.616 ************************************ 00:24:00.616 END TEST nvmf_multicontroller 00:24:00.616 ************************************ 00:24:00.616 11:39:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:00.616 11:39:34 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:00.616 11:39:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:00.616 11:39:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:00.616 11:39:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:00.616 ************************************ 00:24:00.616 START TEST nvmf_aer 00:24:00.616 ************************************ 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:00.616 * Looking for test storage... 
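Before the aer run gets under way, it is worth condensing what the multicontroller trace above exercised on the bdevperf side. The block below is a rough manual replay, not the harness itself; it assumes the bdevperf binary and socket paths from this run and that rpc_cmd essentially forwards to scripts/rpc.py (paths relative to the spdk checkout):

  # Start bdevperf idle (-z) on its own RPC socket
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
  RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"

  # First path to cnode1 on port 4420 succeeds and creates bdev NVMe0n1
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

  # Re-using the name NVMe0 with a different hostnqn, a different subsystem, or
  # with -x disable / -x failover against the same path fails with the -114
  # "already exists" errors quoted above; one representative negative call:
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 || true

  # A second listener (port 4421) can be attached as another path, dropped again,
  # and then attached as a separate controller NVMe1 before the I/O pass runs
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  $RPC bdev_nvme_get_controllers | grep -c NVMe        # expect 2
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests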
00:24:00.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:24:00.616 11:39:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:07.189 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 
0x159b)' 00:24:07.189 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:07.189 Found net devices under 0000:af:00.0: cvl_0_0 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:07.189 Found net devices under 0000:af:00.1: cvl_0_1 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:07.189 
11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:07.189 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:07.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:07.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:24:07.190 00:24:07.190 --- 10.0.0.2 ping statistics --- 00:24:07.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.190 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:07.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:07.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:24:07.190 00:24:07.190 --- 10.0.0.1 ping statistics --- 00:24:07.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.190 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2877705 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2877705 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 2877705 ']' 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:07.190 11:39:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:07.190 [2024-07-15 11:39:40.732265] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:24:07.190 [2024-07-15 11:39:40.732320] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.190 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.190 [2024-07-15 11:39:40.821652] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:07.190 [2024-07-15 11:39:40.913791] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.190 [2024-07-15 11:39:40.913835] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
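Once the target is up, aer.sh provisions it over RPC; the rpc_cmd calls traced below reduce to the following. A hedged sketch, assuming scripts/rpc.py against the default /var/tmp/spdk.sock (which is what the rpc_cmd wrapper in the harness talks to):

  RPC="scripts/rpc.py"      # defaults to /var/tmp/spdk.sock

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 --name Malloc0
  # -m 2 caps the subsystem at two namespaces, which the AER test relies on
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_get_subsystems   # shows cnode1 with the single Malloc0 namespace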
00:24:07.190 [2024-07-15 11:39:40.913845] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.190 [2024-07-15 11:39:40.913856] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.190 [2024-07-15 11:39:40.913864] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:07.190 [2024-07-15 11:39:40.913916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.190 [2024-07-15 11:39:40.913962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.190 [2024-07-15 11:39:40.914086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:07.190 [2024-07-15 11:39:40.914087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:07.448 [2024-07-15 11:39:41.735445] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:07.448 Malloc0 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:07.448 [2024-07-15 11:39:41.795724] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:07.448 [ 00:24:07.448 { 00:24:07.448 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:07.448 "subtype": "Discovery", 00:24:07.448 "listen_addresses": [], 00:24:07.448 "allow_any_host": true, 00:24:07.448 "hosts": [] 00:24:07.448 }, 00:24:07.448 { 00:24:07.448 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.448 "subtype": "NVMe", 00:24:07.448 "listen_addresses": [ 00:24:07.448 { 00:24:07.448 "trtype": "TCP", 00:24:07.448 "adrfam": "IPv4", 00:24:07.448 "traddr": "10.0.0.2", 00:24:07.448 "trsvcid": "4420" 00:24:07.448 } 00:24:07.448 ], 00:24:07.448 "allow_any_host": true, 00:24:07.448 "hosts": [], 00:24:07.448 "serial_number": "SPDK00000000000001", 00:24:07.448 "model_number": "SPDK bdev Controller", 00:24:07.448 "max_namespaces": 2, 00:24:07.448 "min_cntlid": 1, 00:24:07.448 "max_cntlid": 65519, 00:24:07.448 "namespaces": [ 00:24:07.448 { 00:24:07.448 "nsid": 1, 00:24:07.448 "bdev_name": "Malloc0", 00:24:07.448 "name": "Malloc0", 00:24:07.448 "nguid": "1051BC6FA75F411CAB3F5B5A1B74BD70", 00:24:07.448 "uuid": "1051bc6f-a75f-411c-ab3f-5b5a1b74bd70" 00:24:07.448 } 00:24:07.448 ] 00:24:07.448 } 00:24:07.448 ] 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2877998 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:24:07.448 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:07.448 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.706 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:07.706 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:24:07.706 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:24:07.706 11:39:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:07.706 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:07.706 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:07.706 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:24:07.706 11:39:42 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:07.706 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.706 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:07.706 Malloc1 00:24:07.706 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:07.707 [ 00:24:07.707 { 00:24:07.707 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:07.707 "subtype": "Discovery", 00:24:07.707 "listen_addresses": [], 00:24:07.707 "allow_any_host": true, 00:24:07.707 "hosts": [] 00:24:07.707 }, 00:24:07.707 { 00:24:07.707 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.707 "subtype": "NVMe", 00:24:07.707 "listen_addresses": [ 00:24:07.707 { 00:24:07.707 "trtype": "TCP", 00:24:07.707 "adrfam": "IPv4", 00:24:07.707 "traddr": "10.0.0.2", 00:24:07.707 "trsvcid": "4420" 00:24:07.707 } 00:24:07.707 ], 00:24:07.707 "allow_any_host": true, 00:24:07.707 "hosts": [], 00:24:07.707 "serial_number": "SPDK00000000000001", 00:24:07.707 "model_number": "SPDK bdev Controller", 00:24:07.707 "max_namespaces": 2, 00:24:07.707 "min_cntlid": 1, 00:24:07.707 "max_cntlid": 65519, 00:24:07.707 "namespaces": [ 00:24:07.707 { 00:24:07.707 "nsid": 1, 00:24:07.707 "bdev_name": "Malloc0", 00:24:07.707 "name": "Malloc0", 00:24:07.707 "nguid": "1051BC6FA75F411CAB3F5B5A1B74BD70", 00:24:07.707 Asynchronous Event Request test 00:24:07.707 Attaching to 10.0.0.2 00:24:07.707 Attached to 10.0.0.2 00:24:07.707 Registering asynchronous event callbacks... 00:24:07.707 Starting namespace attribute notice tests for all controllers... 00:24:07.707 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:07.707 aer_cb - Changed Namespace 00:24:07.707 Cleaning up... 
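The "Asynchronous Event Request test ... Cleaning up..." block just above is stdout from the aer example application; it is interleaved with the nvmf_get_subsystems JSON, which resumes below. A minimal sketch of what this part of host/aer.sh does, condensed from the commands already visible in the trace (rpc_cmd and the touch-file wait are autotest helpers; the paths are the ones this job uses):

    # start the AER listener against the subsystem created earlier; it touches the file once attached
    ./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    aerpid=$!
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
    # hot-add a second namespace; the target emits a Namespace Attribute Changed AEN (log page 4),
    # which the app reports above as "aer_cb - Changed Namespace"
    rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    rpc_cmd nvmf_get_subsystems
    wait $aerpid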
00:24:07.707 "uuid": "1051bc6f-a75f-411c-ab3f-5b5a1b74bd70" 00:24:07.707 }, 00:24:07.707 { 00:24:07.707 "nsid": 2, 00:24:07.707 "bdev_name": "Malloc1", 00:24:07.707 "name": "Malloc1", 00:24:07.707 "nguid": "90137614657D4AE9919E8EF1026AC54F", 00:24:07.707 "uuid": "90137614-657d-4ae9-919e-8ef1026ac54f" 00:24:07.707 } 00:24:07.707 ] 00:24:07.707 } 00:24:07.707 ] 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2877998 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:07.707 11:39:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:07.707 rmmod nvme_tcp 00:24:07.965 rmmod nvme_fabrics 00:24:07.965 rmmod nvme_keyring 00:24:07.965 11:39:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:07.965 11:39:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:24:07.965 11:39:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:24:07.965 11:39:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2877705 ']' 00:24:07.965 11:39:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2877705 00:24:07.965 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 2877705 ']' 00:24:07.965 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 2877705 00:24:07.965 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:24:07.965 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:07.965 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2877705 00:24:07.965 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:07.965 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:07.965 11:39:42 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 2877705' 00:24:07.965 killing process with pid 2877705 00:24:07.965 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 2877705 00:24:07.965 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 2877705 00:24:08.222 11:39:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:08.222 11:39:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:08.222 11:39:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:08.222 11:39:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:08.222 11:39:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:08.223 11:39:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.223 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.223 11:39:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.122 11:39:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:10.122 00:24:10.122 real 0m9.957s 00:24:10.122 user 0m7.979s 00:24:10.122 sys 0m4.986s 00:24:10.122 11:39:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:10.122 11:39:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:10.122 ************************************ 00:24:10.122 END TEST nvmf_aer 00:24:10.122 ************************************ 00:24:10.122 11:39:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:10.122 11:39:44 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:10.122 11:39:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:10.122 11:39:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:10.122 11:39:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:10.382 ************************************ 00:24:10.382 START TEST nvmf_async_init 00:24:10.382 ************************************ 00:24:10.382 11:39:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:10.382 * Looking for test storage... 
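For reference, the nvmf_aer case that just finished (END TEST nvmf_aer above) provisions and tears down its target with a short RPC sequence; the commands below are lifted straight from the trace (rpc_cmd is the autotest helper that forwards to the target's JSON-RPC socket at /var/tmp/spdk.sock):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 --name Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # ... AER exercise (see the sketch earlier) ...
    # teardown, in reverse order
    rpc_cmd bdev_malloc_delete Malloc0
    rpc_cmd bdev_malloc_delete Malloc1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1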
00:24:10.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:10.382 11:39:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:10.382 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:10.382 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.382 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.382 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.382 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.382 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.382 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.382 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.382 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.382 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.382 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.382 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:10.382 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:10.382 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.382 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.382 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:10.382 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=7887a548666f4080bf189f0391830d47 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:10.383 11:39:44 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:24:10.383 11:39:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:17.032 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:17.033 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:17.033 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:17.033 Found net devices under 0000:af:00.0: cvl_0_0 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:17.033 Found net devices under 0000:af:00.1: cvl_0_1 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:17.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:17.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:24:17.033 00:24:17.033 --- 10.0.0.2 ping statistics --- 00:24:17.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.033 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:17.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:17.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:24:17.033 00:24:17.033 --- 10.0.0.1 ping statistics --- 00:24:17.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.033 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2881549 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2881549 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 2881549 ']' 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:17.033 11:39:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.033 [2024-07-15 11:39:50.657271] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
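Before this second target comes up, nvmftestinit (traced above) wires the two e810 ports, cvl_0_0 and cvl_0_1, into a back-to-back NVMe/TCP topology by moving the target-side port into its own network namespace. A condensed sketch of that plumbing, using only the interface names and addresses recorded in this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # allow NVMe/TCP (port 4420) through the host firewall
    ping -c 1 10.0.0.2                                               # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator sanity check

The nvmf_tgt that follows is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1), so it listens on 10.0.0.2 while the initiator connects from 10.0.0.1.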
00:24:17.033 [2024-07-15 11:39:50.657334] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.033 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.033 [2024-07-15 11:39:50.743937] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.033 [2024-07-15 11:39:50.837100] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.033 [2024-07-15 11:39:50.837141] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.033 [2024-07-15 11:39:50.837151] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.033 [2024-07-15 11:39:50.837160] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:17.033 [2024-07-15 11:39:50.837168] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:17.033 [2024-07-15 11:39:50.837188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.293 [2024-07-15 11:39:51.647412] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.293 null0 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.293 11:39:51 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7887a548666f4080bf189f0391830d47 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.293 [2024-07-15 11:39:51.687625] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.293 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.552 nvme0n1 00:24:17.552 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.552 11:39:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:17.552 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.552 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.552 [ 00:24:17.552 { 00:24:17.552 "name": "nvme0n1", 00:24:17.552 "aliases": [ 00:24:17.552 "7887a548-666f-4080-bf18-9f0391830d47" 00:24:17.552 ], 00:24:17.552 "product_name": "NVMe disk", 00:24:17.552 "block_size": 512, 00:24:17.552 "num_blocks": 2097152, 00:24:17.552 "uuid": "7887a548-666f-4080-bf18-9f0391830d47", 00:24:17.552 "assigned_rate_limits": { 00:24:17.552 "rw_ios_per_sec": 0, 00:24:17.552 "rw_mbytes_per_sec": 0, 00:24:17.552 "r_mbytes_per_sec": 0, 00:24:17.552 "w_mbytes_per_sec": 0 00:24:17.552 }, 00:24:17.552 "claimed": false, 00:24:17.552 "zoned": false, 00:24:17.552 "supported_io_types": { 00:24:17.552 "read": true, 00:24:17.552 "write": true, 00:24:17.552 "unmap": false, 00:24:17.552 "flush": true, 00:24:17.552 "reset": true, 00:24:17.552 "nvme_admin": true, 00:24:17.552 "nvme_io": true, 00:24:17.552 "nvme_io_md": false, 00:24:17.552 "write_zeroes": true, 00:24:17.552 "zcopy": false, 00:24:17.552 "get_zone_info": false, 00:24:17.552 "zone_management": false, 00:24:17.552 "zone_append": false, 00:24:17.552 "compare": true, 00:24:17.552 "compare_and_write": true, 00:24:17.552 "abort": true, 00:24:17.552 "seek_hole": false, 00:24:17.552 "seek_data": false, 00:24:17.552 "copy": true, 00:24:17.552 "nvme_iov_md": false 00:24:17.552 }, 00:24:17.552 "memory_domains": [ 00:24:17.552 { 00:24:17.552 "dma_device_id": "system", 00:24:17.552 "dma_device_type": 1 00:24:17.552 } 00:24:17.552 ], 00:24:17.552 "driver_specific": { 00:24:17.552 "nvme": [ 00:24:17.552 { 00:24:17.552 "trid": { 00:24:17.552 "trtype": "TCP", 00:24:17.552 "adrfam": "IPv4", 00:24:17.552 "traddr": "10.0.0.2", 
00:24:17.552 "trsvcid": "4420", 00:24:17.552 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:17.552 }, 00:24:17.552 "ctrlr_data": { 00:24:17.552 "cntlid": 1, 00:24:17.552 "vendor_id": "0x8086", 00:24:17.552 "model_number": "SPDK bdev Controller", 00:24:17.552 "serial_number": "00000000000000000000", 00:24:17.552 "firmware_revision": "24.09", 00:24:17.552 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:17.552 "oacs": { 00:24:17.552 "security": 0, 00:24:17.552 "format": 0, 00:24:17.552 "firmware": 0, 00:24:17.552 "ns_manage": 0 00:24:17.552 }, 00:24:17.552 "multi_ctrlr": true, 00:24:17.552 "ana_reporting": false 00:24:17.552 }, 00:24:17.552 "vs": { 00:24:17.552 "nvme_version": "1.3" 00:24:17.552 }, 00:24:17.552 "ns_data": { 00:24:17.552 "id": 1, 00:24:17.552 "can_share": true 00:24:17.552 } 00:24:17.552 } 00:24:17.552 ], 00:24:17.552 "mp_policy": "active_passive" 00:24:17.552 } 00:24:17.552 } 00:24:17.552 ] 00:24:17.552 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.552 11:39:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:17.552 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.552 11:39:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.553 [2024-07-15 11:39:51.944728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:17.553 [2024-07-15 11:39:51.944801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x150eb50 (9): Bad file descriptor 00:24:17.812 [2024-07-15 11:39:52.076376] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:17.812 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.812 11:39:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:17.812 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.812 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.812 [ 00:24:17.812 { 00:24:17.812 "name": "nvme0n1", 00:24:17.812 "aliases": [ 00:24:17.812 "7887a548-666f-4080-bf18-9f0391830d47" 00:24:17.812 ], 00:24:17.812 "product_name": "NVMe disk", 00:24:17.812 "block_size": 512, 00:24:17.812 "num_blocks": 2097152, 00:24:17.812 "uuid": "7887a548-666f-4080-bf18-9f0391830d47", 00:24:17.812 "assigned_rate_limits": { 00:24:17.812 "rw_ios_per_sec": 0, 00:24:17.812 "rw_mbytes_per_sec": 0, 00:24:17.812 "r_mbytes_per_sec": 0, 00:24:17.812 "w_mbytes_per_sec": 0 00:24:17.812 }, 00:24:17.812 "claimed": false, 00:24:17.812 "zoned": false, 00:24:17.812 "supported_io_types": { 00:24:17.812 "read": true, 00:24:17.812 "write": true, 00:24:17.812 "unmap": false, 00:24:17.812 "flush": true, 00:24:17.812 "reset": true, 00:24:17.812 "nvme_admin": true, 00:24:17.812 "nvme_io": true, 00:24:17.812 "nvme_io_md": false, 00:24:17.812 "write_zeroes": true, 00:24:17.812 "zcopy": false, 00:24:17.812 "get_zone_info": false, 00:24:17.812 "zone_management": false, 00:24:17.812 "zone_append": false, 00:24:17.812 "compare": true, 00:24:17.812 "compare_and_write": true, 00:24:17.812 "abort": true, 00:24:17.812 "seek_hole": false, 00:24:17.812 "seek_data": false, 00:24:17.812 "copy": true, 00:24:17.812 "nvme_iov_md": false 00:24:17.812 }, 00:24:17.812 "memory_domains": [ 00:24:17.812 { 00:24:17.812 "dma_device_id": "system", 00:24:17.813 "dma_device_type": 
1 00:24:17.813 } 00:24:17.813 ], 00:24:17.813 "driver_specific": { 00:24:17.813 "nvme": [ 00:24:17.813 { 00:24:17.813 "trid": { 00:24:17.813 "trtype": "TCP", 00:24:17.813 "adrfam": "IPv4", 00:24:17.813 "traddr": "10.0.0.2", 00:24:17.813 "trsvcid": "4420", 00:24:17.813 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:17.813 }, 00:24:17.813 "ctrlr_data": { 00:24:17.813 "cntlid": 2, 00:24:17.813 "vendor_id": "0x8086", 00:24:17.813 "model_number": "SPDK bdev Controller", 00:24:17.813 "serial_number": "00000000000000000000", 00:24:17.813 "firmware_revision": "24.09", 00:24:17.813 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:17.813 "oacs": { 00:24:17.813 "security": 0, 00:24:17.813 "format": 0, 00:24:17.813 "firmware": 0, 00:24:17.813 "ns_manage": 0 00:24:17.813 }, 00:24:17.813 "multi_ctrlr": true, 00:24:17.813 "ana_reporting": false 00:24:17.813 }, 00:24:17.813 "vs": { 00:24:17.813 "nvme_version": "1.3" 00:24:17.813 }, 00:24:17.813 "ns_data": { 00:24:17.813 "id": 1, 00:24:17.813 "can_share": true 00:24:17.813 } 00:24:17.813 } 00:24:17.813 ], 00:24:17.813 "mp_policy": "active_passive" 00:24:17.813 } 00:24:17.813 } 00:24:17.813 ] 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.JUWxzfwCSp 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.JUWxzfwCSp 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.813 [2024-07-15 11:39:52.137407] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:17.813 [2024-07-15 11:39:52.137550] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JUWxzfwCSp 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.813 [2024-07-15 11:39:52.145420] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JUWxzfwCSp 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.813 [2024-07-15 11:39:52.153464] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:17.813 [2024-07-15 11:39:52.153509] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:17.813 nvme0n1 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.813 [ 00:24:17.813 { 00:24:17.813 "name": "nvme0n1", 00:24:17.813 "aliases": [ 00:24:17.813 "7887a548-666f-4080-bf18-9f0391830d47" 00:24:17.813 ], 00:24:17.813 "product_name": "NVMe disk", 00:24:17.813 "block_size": 512, 00:24:17.813 "num_blocks": 2097152, 00:24:17.813 "uuid": "7887a548-666f-4080-bf18-9f0391830d47", 00:24:17.813 "assigned_rate_limits": { 00:24:17.813 "rw_ios_per_sec": 0, 00:24:17.813 "rw_mbytes_per_sec": 0, 00:24:17.813 "r_mbytes_per_sec": 0, 00:24:17.813 "w_mbytes_per_sec": 0 00:24:17.813 }, 00:24:17.813 "claimed": false, 00:24:17.813 "zoned": false, 00:24:17.813 "supported_io_types": { 00:24:17.813 "read": true, 00:24:17.813 "write": true, 00:24:17.813 "unmap": false, 00:24:17.813 "flush": true, 00:24:17.813 "reset": true, 00:24:17.813 "nvme_admin": true, 00:24:17.813 "nvme_io": true, 00:24:17.813 "nvme_io_md": false, 00:24:17.813 "write_zeroes": true, 00:24:17.813 "zcopy": false, 00:24:17.813 "get_zone_info": false, 00:24:17.813 "zone_management": false, 00:24:17.813 "zone_append": false, 00:24:17.813 "compare": true, 00:24:17.813 "compare_and_write": true, 00:24:17.813 "abort": true, 00:24:17.813 "seek_hole": false, 00:24:17.813 "seek_data": false, 00:24:17.813 "copy": true, 00:24:17.813 "nvme_iov_md": false 00:24:17.813 }, 00:24:17.813 "memory_domains": [ 00:24:17.813 { 00:24:17.813 "dma_device_id": "system", 00:24:17.813 "dma_device_type": 1 00:24:17.813 } 00:24:17.813 ], 00:24:17.813 "driver_specific": { 00:24:17.813 "nvme": [ 00:24:17.813 { 00:24:17.813 "trid": { 00:24:17.813 "trtype": "TCP", 00:24:17.813 "adrfam": "IPv4", 00:24:17.813 "traddr": "10.0.0.2", 00:24:17.813 "trsvcid": "4421", 00:24:17.813 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:17.813 }, 00:24:17.813 "ctrlr_data": { 00:24:17.813 "cntlid": 3, 00:24:17.813 "vendor_id": "0x8086", 00:24:17.813 "model_number": "SPDK bdev Controller", 00:24:17.813 "serial_number": "00000000000000000000", 00:24:17.813 "firmware_revision": "24.09", 00:24:17.813 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:24:17.813 "oacs": { 00:24:17.813 "security": 0, 00:24:17.813 "format": 0, 00:24:17.813 "firmware": 0, 00:24:17.813 "ns_manage": 0 00:24:17.813 }, 00:24:17.813 "multi_ctrlr": true, 00:24:17.813 "ana_reporting": false 00:24:17.813 }, 00:24:17.813 "vs": { 00:24:17.813 "nvme_version": "1.3" 00:24:17.813 }, 00:24:17.813 "ns_data": { 00:24:17.813 "id": 1, 00:24:17.813 "can_share": true 00:24:17.813 } 00:24:17.813 } 00:24:17.813 ], 00:24:17.813 "mp_policy": "active_passive" 00:24:17.813 } 00:24:17.813 } 00:24:17.813 ] 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.JUWxzfwCSp 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:17.813 11:39:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:17.813 rmmod nvme_tcp 00:24:18.072 rmmod nvme_fabrics 00:24:18.072 rmmod nvme_keyring 00:24:18.072 11:39:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:18.072 11:39:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:24:18.073 11:39:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:24:18.073 11:39:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2881549 ']' 00:24:18.073 11:39:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2881549 00:24:18.073 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 2881549 ']' 00:24:18.073 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 2881549 00:24:18.073 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:24:18.073 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:18.073 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2881549 00:24:18.073 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:18.073 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:18.073 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2881549' 00:24:18.073 killing process with pid 2881549 00:24:18.073 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 2881549 00:24:18.073 [2024-07-15 11:39:52.375631] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:24:18.073 [2024-07-15 11:39:52.375663] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:18.073 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 2881549 00:24:18.332 11:39:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:18.332 11:39:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:18.332 11:39:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:18.332 11:39:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:18.332 11:39:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:18.332 11:39:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.332 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:18.332 11:39:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.238 11:39:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:20.238 00:24:20.238 real 0m10.020s 00:24:20.238 user 0m3.868s 00:24:20.238 sys 0m4.817s 00:24:20.238 11:39:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:20.238 11:39:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:20.238 ************************************ 00:24:20.238 END TEST nvmf_async_init 00:24:20.238 ************************************ 00:24:20.238 11:39:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:20.239 11:39:54 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:20.239 11:39:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:20.239 11:39:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:20.239 11:39:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:20.498 ************************************ 00:24:20.498 START TEST dma 00:24:20.498 ************************************ 00:24:20.498 11:39:54 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:20.498 * Looking for test storage... 
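The tail of nvmf_async_init above exercises the experimental NVMe/TCP TLS path: a PSK is written to a temporary file and handed to both the target listener and the initiator attach, which is why both sides log the v24.09 PSK-path deprecation warnings seen above. Condensed from the trace (the interchange key is the test key from this run, not a secret worth protecting):

    key_path=$(mktemp)        # /tmp/tmp.JUWxzfwCSp in this run
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"

The second bdev_get_bdevs dump above confirms the controller re-attached over the secure listener on port 4421 (cntlid 3).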
00:24:20.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:20.498 11:39:54 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:20.498 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:24:20.498 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.498 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.498 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.498 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.498 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.498 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:20.498 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.498 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.498 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.498 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.498 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:20.498 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:20.498 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.498 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.498 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:20.498 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.499 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:20.499 11:39:54 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.499 11:39:54 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.499 11:39:54 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.499 11:39:54 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.499 11:39:54 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.499 11:39:54 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.499 11:39:54 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:24:20.499 11:39:54 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.499 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:24:20.499 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:20.499 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:20.499 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:20.499 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.499 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.499 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:20.499 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:20.499 11:39:54 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:20.499 11:39:54 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:20.499 11:39:54 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:24:20.499 00:24:20.499 real 0m0.122s 00:24:20.499 user 0m0.060s 00:24:20.499 sys 0m0.071s 00:24:20.499 11:39:54 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:20.499 11:39:54 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:24:20.499 ************************************ 00:24:20.499 END TEST dma 00:24:20.499 ************************************ 00:24:20.499 11:39:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:20.499 11:39:54 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:20.499 11:39:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:20.499 11:39:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:20.499 11:39:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:20.499 ************************************ 00:24:20.499 START TEST nvmf_identify 00:24:20.499 ************************************ 00:24:20.499 11:39:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:20.759 * Looking for test storage... 
00:24:20.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:20.759 11:39:54 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:20.759 11:39:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:20.759 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:20.760 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:20.760 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:20.760 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.760 11:39:55 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:20.760 11:39:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.760 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:20.760 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:20.760 11:39:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:20.760 11:39:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:27.330 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:27.330 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:27.330 Found net devices under 0000:af:00.0: cvl_0_0 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:27.330 Found net devices under 0000:af:00.1: cvl_0_1 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:27.330 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:27.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:27.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:24:27.331 00:24:27.331 --- 10.0.0.2 ping statistics --- 00:24:27.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.331 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:27.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:27.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:24:27.331 00:24:27.331 --- 10.0.0.1 ping statistics --- 00:24:27.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.331 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2885520 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2885520 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 2885520 ']' 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:27.331 11:40:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.331 [2024-07-15 11:40:00.858405] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:24:27.331 [2024-07-15 11:40:00.858459] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.331 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.331 [2024-07-15 11:40:00.939471] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:27.331 [2024-07-15 11:40:01.031246] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
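The reachability checks above complete the physical-NIC topology that nvmf_tcp_init builds for this run: one e810 port (cvl_0_0) is moved into a dedicated network namespace to act as the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of that setup, using only the iproute2/iptables commands already visible in the trace (interface names and addresses are specific to this testbed), looks like this:

# Sketch of the namespace topology built by nvmf_tcp_init in this run.
ip netns add cvl_0_0_ns_spdk                                        # namespace that hosts the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside the namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the initiator side
ping -c 1 10.0.0.2                                                  # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> root namespace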
00:24:27.331 [2024-07-15 11:40:01.031297] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:27.331 [2024-07-15 11:40:01.031307] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.331 [2024-07-15 11:40:01.031316] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:27.331 [2024-07-15 11:40:01.031323] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:27.331 [2024-07-15 11:40:01.035281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.331 [2024-07-15 11:40:01.035320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:27.331 [2024-07-15 11:40:01.035430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:27.331 [2024-07-15 11:40:01.035432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.592 [2024-07-15 11:40:01.811004] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.592 Malloc0 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.592 [2024-07-15 11:40:01.907186] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.592 [ 00:24:27.592 { 00:24:27.592 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:27.592 "subtype": "Discovery", 00:24:27.592 "listen_addresses": [ 00:24:27.592 { 00:24:27.592 "trtype": "TCP", 00:24:27.592 "adrfam": "IPv4", 00:24:27.592 "traddr": "10.0.0.2", 00:24:27.592 "trsvcid": "4420" 00:24:27.592 } 00:24:27.592 ], 00:24:27.592 "allow_any_host": true, 00:24:27.592 "hosts": [] 00:24:27.592 }, 00:24:27.592 { 00:24:27.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.592 "subtype": "NVMe", 00:24:27.592 "listen_addresses": [ 00:24:27.592 { 00:24:27.592 "trtype": "TCP", 00:24:27.592 "adrfam": "IPv4", 00:24:27.592 "traddr": "10.0.0.2", 00:24:27.592 "trsvcid": "4420" 00:24:27.592 } 00:24:27.592 ], 00:24:27.592 "allow_any_host": true, 00:24:27.592 "hosts": [], 00:24:27.592 "serial_number": "SPDK00000000000001", 00:24:27.592 "model_number": "SPDK bdev Controller", 00:24:27.592 "max_namespaces": 32, 00:24:27.592 "min_cntlid": 1, 00:24:27.592 "max_cntlid": 65519, 00:24:27.592 "namespaces": [ 00:24:27.592 { 00:24:27.592 "nsid": 1, 00:24:27.592 "bdev_name": "Malloc0", 00:24:27.592 "name": "Malloc0", 00:24:27.592 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:27.592 "eui64": "ABCDEF0123456789", 00:24:27.592 "uuid": "d4d82978-c3a6-4c1b-b187-181a8f4dd467" 00:24:27.592 } 00:24:27.592 ] 00:24:27.592 } 00:24:27.592 ] 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.592 11:40:01 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:27.592 [2024-07-15 11:40:01.959459] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
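The identify test above starts an nvmf_tgt inside the target namespace and configures it over JSON-RPC before issuing the discovery query. The rpc_cmd calls in the trace map directly onto standalone scripts/rpc.py invocations; a minimal sketch of the same sequence (paths abbreviated relative to the workspace, RPC socket left at its default, and all arguments taken from the trace) would be:

# Sketch of the target bring-up and configuration performed by identify.sh above.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# ...wait for the RPC socket to come up (waitforlisten in the test)...
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems          # should list the discovery subsystem plus cnode1 with Malloc0
# Query the discovery subsystem the same way the test does:
./build/bin/spdk_nvme_identify -L all \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'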
00:24:27.592 [2024-07-15 11:40:01.959495] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885796 ] 00:24:27.592 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.592 [2024-07-15 11:40:01.997809] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:27.592 [2024-07-15 11:40:01.997868] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:27.592 [2024-07-15 11:40:01.997875] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:27.592 [2024-07-15 11:40:01.997887] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:27.592 [2024-07-15 11:40:01.997895] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:27.592 [2024-07-15 11:40:01.998199] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:27.592 [2024-07-15 11:40:01.998234] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x8aaec0 0 00:24:27.592 [2024-07-15 11:40:02.012269] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:27.592 [2024-07-15 11:40:02.012282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:27.592 [2024-07-15 11:40:02.012288] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:27.592 [2024-07-15 11:40:02.012293] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:27.592 [2024-07-15 11:40:02.012337] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.592 [2024-07-15 11:40:02.012344] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.592 [2024-07-15 11:40:02.012350] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8aaec0) 00:24:27.592 [2024-07-15 11:40:02.012366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:27.592 [2024-07-15 11:40:02.012385] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92de40, cid 0, qid 0 00:24:27.592 [2024-07-15 11:40:02.019266] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.592 [2024-07-15 11:40:02.019279] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.592 [2024-07-15 11:40:02.019284] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.592 [2024-07-15 11:40:02.019293] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92de40) on tqpair=0x8aaec0 00:24:27.592 [2024-07-15 11:40:02.019310] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:27.592 [2024-07-15 11:40:02.019318] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:27.592 [2024-07-15 11:40:02.019325] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:27.592 [2024-07-15 11:40:02.019342] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.592 [2024-07-15 11:40:02.019347] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.592 [2024-07-15 11:40:02.019352] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8aaec0) 00:24:27.592 [2024-07-15 11:40:02.019362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.593 [2024-07-15 11:40:02.019379] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92de40, cid 0, qid 0 00:24:27.593 [2024-07-15 11:40:02.019604] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.593 [2024-07-15 11:40:02.019613] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.593 [2024-07-15 11:40:02.019617] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.019622] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92de40) on tqpair=0x8aaec0 00:24:27.593 [2024-07-15 11:40:02.019628] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:27.593 [2024-07-15 11:40:02.019638] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:27.593 [2024-07-15 11:40:02.019647] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.019652] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.019657] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8aaec0) 00:24:27.593 [2024-07-15 11:40:02.019666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.593 [2024-07-15 11:40:02.019680] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92de40, cid 0, qid 0 00:24:27.593 [2024-07-15 11:40:02.019789] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.593 [2024-07-15 11:40:02.019797] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.593 [2024-07-15 11:40:02.019801] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.019806] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92de40) on tqpair=0x8aaec0 00:24:27.593 [2024-07-15 11:40:02.019813] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:27.593 [2024-07-15 11:40:02.019823] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:27.593 [2024-07-15 11:40:02.019831] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.019836] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.019841] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8aaec0) 00:24:27.593 [2024-07-15 11:40:02.019849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.593 [2024-07-15 11:40:02.019862] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92de40, cid 0, qid 0 00:24:27.593 [2024-07-15 11:40:02.019968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.593 
[2024-07-15 11:40:02.019977] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.593 [2024-07-15 11:40:02.019982] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.019986] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92de40) on tqpair=0x8aaec0 00:24:27.593 [2024-07-15 11:40:02.019996] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:27.593 [2024-07-15 11:40:02.020007] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.020013] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.020017] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8aaec0) 00:24:27.593 [2024-07-15 11:40:02.020025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.593 [2024-07-15 11:40:02.020039] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92de40, cid 0, qid 0 00:24:27.593 [2024-07-15 11:40:02.020150] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.593 [2024-07-15 11:40:02.020158] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.593 [2024-07-15 11:40:02.020162] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.020167] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92de40) on tqpair=0x8aaec0 00:24:27.593 [2024-07-15 11:40:02.020173] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:27.593 [2024-07-15 11:40:02.020179] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:27.593 [2024-07-15 11:40:02.020188] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:27.593 [2024-07-15 11:40:02.020295] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:27.593 [2024-07-15 11:40:02.020302] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:27.593 [2024-07-15 11:40:02.020312] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.020317] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.020322] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8aaec0) 00:24:27.593 [2024-07-15 11:40:02.020330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.593 [2024-07-15 11:40:02.020344] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92de40, cid 0, qid 0 00:24:27.593 [2024-07-15 11:40:02.020515] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.593 [2024-07-15 11:40:02.020523] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.593 [2024-07-15 11:40:02.020528] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:24:27.593 [2024-07-15 11:40:02.020532] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92de40) on tqpair=0x8aaec0 00:24:27.593 [2024-07-15 11:40:02.020538] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:27.593 [2024-07-15 11:40:02.020549] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.020555] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.020559] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8aaec0) 00:24:27.593 [2024-07-15 11:40:02.020568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.593 [2024-07-15 11:40:02.020581] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92de40, cid 0, qid 0 00:24:27.593 [2024-07-15 11:40:02.020686] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.593 [2024-07-15 11:40:02.020694] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.593 [2024-07-15 11:40:02.020699] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.020706] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92de40) on tqpair=0x8aaec0 00:24:27.593 [2024-07-15 11:40:02.020711] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:27.593 [2024-07-15 11:40:02.020717] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:27.593 [2024-07-15 11:40:02.020726] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:27.593 [2024-07-15 11:40:02.020736] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:27.593 [2024-07-15 11:40:02.020748] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.020753] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8aaec0) 00:24:27.593 [2024-07-15 11:40:02.020762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.593 [2024-07-15 11:40:02.020776] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92de40, cid 0, qid 0 00:24:27.593 [2024-07-15 11:40:02.020927] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.593 [2024-07-15 11:40:02.020936] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.593 [2024-07-15 11:40:02.020941] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.020946] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8aaec0): datao=0, datal=4096, cccid=0 00:24:27.593 [2024-07-15 11:40:02.020952] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x92de40) on tqpair(0x8aaec0): expected_datao=0, payload_size=4096 00:24:27.593 [2024-07-15 11:40:02.020957] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:24:27.593 [2024-07-15 11:40:02.020967] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.020972] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.021008] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.593 [2024-07-15 11:40:02.021016] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.593 [2024-07-15 11:40:02.021020] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.021025] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92de40) on tqpair=0x8aaec0 00:24:27.593 [2024-07-15 11:40:02.021034] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:27.593 [2024-07-15 11:40:02.021043] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:27.593 [2024-07-15 11:40:02.021049] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:27.593 [2024-07-15 11:40:02.021055] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:27.593 [2024-07-15 11:40:02.021061] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:27.593 [2024-07-15 11:40:02.021067] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:27.593 [2024-07-15 11:40:02.021077] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:27.593 [2024-07-15 11:40:02.021086] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.021091] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.021095] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8aaec0) 00:24:27.593 [2024-07-15 11:40:02.021107] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:27.593 [2024-07-15 11:40:02.021122] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92de40, cid 0, qid 0 00:24:27.593 [2024-07-15 11:40:02.021228] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.593 [2024-07-15 11:40:02.021237] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.593 [2024-07-15 11:40:02.021241] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.021245] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92de40) on tqpair=0x8aaec0 00:24:27.593 [2024-07-15 11:40:02.021263] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.021269] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.021273] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8aaec0) 00:24:27.593 [2024-07-15 11:40:02.021281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.593 [2024-07-15 11:40:02.021289] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.021294] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.593 [2024-07-15 11:40:02.021298] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x8aaec0) 00:24:27.593 [2024-07-15 11:40:02.021305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.593 [2024-07-15 11:40:02.021313] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.594 [2024-07-15 11:40:02.021318] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.594 [2024-07-15 11:40:02.021322] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x8aaec0) 00:24:27.594 [2024-07-15 11:40:02.021329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.594 [2024-07-15 11:40:02.021337] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.594 [2024-07-15 11:40:02.021341] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.594 [2024-07-15 11:40:02.021345] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8aaec0) 00:24:27.594 [2024-07-15 11:40:02.021353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.594 [2024-07-15 11:40:02.021359] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:27.594 [2024-07-15 11:40:02.021373] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:27.594 [2024-07-15 11:40:02.021381] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.594 [2024-07-15 11:40:02.021386] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8aaec0) 00:24:27.594 [2024-07-15 11:40:02.021394] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.594 [2024-07-15 11:40:02.021411] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92de40, cid 0, qid 0 00:24:27.594 [2024-07-15 11:40:02.021417] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92dfc0, cid 1, qid 0 00:24:27.594 [2024-07-15 11:40:02.021423] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e140, cid 2, qid 0 00:24:27.594 [2024-07-15 11:40:02.021430] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e2c0, cid 3, qid 0 00:24:27.594 [2024-07-15 11:40:02.021435] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e440, cid 4, qid 0 00:24:27.594 [2024-07-15 11:40:02.021610] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.594 [2024-07-15 11:40:02.021619] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.594 [2024-07-15 11:40:02.021626] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.594 [2024-07-15 11:40:02.021631] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e440) on tqpair=0x8aaec0 00:24:27.594 [2024-07-15 11:40:02.021638] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:27.594 [2024-07-15 11:40:02.021645] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:27.594 [2024-07-15 11:40:02.021658] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.594 [2024-07-15 11:40:02.021664] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8aaec0) 00:24:27.594 [2024-07-15 11:40:02.021672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.594 [2024-07-15 11:40:02.021685] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e440, cid 4, qid 0 00:24:27.594 [2024-07-15 11:40:02.021833] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.594 [2024-07-15 11:40:02.021840] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.594 [2024-07-15 11:40:02.021845] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.594 [2024-07-15 11:40:02.021849] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8aaec0): datao=0, datal=4096, cccid=4 00:24:27.594 [2024-07-15 11:40:02.021855] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x92e440) on tqpair(0x8aaec0): expected_datao=0, payload_size=4096 00:24:27.594 [2024-07-15 11:40:02.021861] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.594 [2024-07-15 11:40:02.021875] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.594 [2024-07-15 11:40:02.021880] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.859 [2024-07-15 11:40:02.063462] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.859 [2024-07-15 11:40:02.063479] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.859 [2024-07-15 11:40:02.063484] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.859 [2024-07-15 11:40:02.063490] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e440) on tqpair=0x8aaec0 00:24:27.859 [2024-07-15 11:40:02.063506] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:27.859 [2024-07-15 11:40:02.063535] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.859 [2024-07-15 11:40:02.063542] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8aaec0) 00:24:27.859 [2024-07-15 11:40:02.063551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.859 [2024-07-15 11:40:02.063560] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.859 [2024-07-15 11:40:02.063566] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.859 [2024-07-15 11:40:02.063570] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8aaec0) 00:24:27.859 [2024-07-15 11:40:02.063578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.859 [2024-07-15 11:40:02.063598] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x92e440, cid 4, qid 0 00:24:27.859 [2024-07-15 11:40:02.063605] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e5c0, cid 5, qid 0 00:24:27.859 [2024-07-15 11:40:02.063864] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.859 [2024-07-15 11:40:02.063873] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.859 [2024-07-15 11:40:02.063877] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.859 [2024-07-15 11:40:02.063882] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8aaec0): datao=0, datal=1024, cccid=4 00:24:27.859 [2024-07-15 11:40:02.063888] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x92e440) on tqpair(0x8aaec0): expected_datao=0, payload_size=1024 00:24:27.859 [2024-07-15 11:40:02.063897] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.859 [2024-07-15 11:40:02.063905] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.859 [2024-07-15 11:40:02.063910] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.859 [2024-07-15 11:40:02.063918] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.859 [2024-07-15 11:40:02.063925] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.859 [2024-07-15 11:40:02.063929] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.859 [2024-07-15 11:40:02.063934] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e5c0) on tqpair=0x8aaec0 00:24:27.859 [2024-07-15 11:40:02.105427] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.859 [2024-07-15 11:40:02.105441] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.859 [2024-07-15 11:40:02.105446] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.859 [2024-07-15 11:40:02.105451] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e440) on tqpair=0x8aaec0 00:24:27.859 [2024-07-15 11:40:02.105474] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.859 [2024-07-15 11:40:02.105480] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8aaec0) 00:24:27.859 [2024-07-15 11:40:02.105490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.859 [2024-07-15 11:40:02.105511] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e440, cid 4, qid 0 00:24:27.859 [2024-07-15 11:40:02.105679] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.859 [2024-07-15 11:40:02.105688] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.859 [2024-07-15 11:40:02.105692] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.859 [2024-07-15 11:40:02.105697] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8aaec0): datao=0, datal=3072, cccid=4 00:24:27.859 [2024-07-15 11:40:02.105702] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x92e440) on tqpair(0x8aaec0): expected_datao=0, payload_size=3072 00:24:27.859 [2024-07-15 11:40:02.105708] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.859 [2024-07-15 11:40:02.105717] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.859 [2024-07-15 11:40:02.105721] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.859 [2024-07-15 11:40:02.105746] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.859 [2024-07-15 11:40:02.105754] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.859 [2024-07-15 11:40:02.105758] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.859 [2024-07-15 11:40:02.105763] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e440) on tqpair=0x8aaec0 00:24:27.859 [2024-07-15 11:40:02.105774] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.859 [2024-07-15 11:40:02.105779] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8aaec0) 00:24:27.859 [2024-07-15 11:40:02.105787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.859 [2024-07-15 11:40:02.105806] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e440, cid 4, qid 0 00:24:27.859 [2024-07-15 11:40:02.105943] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.859 [2024-07-15 11:40:02.105951] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.859 [2024-07-15 11:40:02.105956] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.859 [2024-07-15 11:40:02.105960] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8aaec0): datao=0, datal=8, cccid=4 00:24:27.859 [2024-07-15 11:40:02.105966] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x92e440) on tqpair(0x8aaec0): expected_datao=0, payload_size=8 00:24:27.859 [2024-07-15 11:40:02.105975] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.859 [2024-07-15 11:40:02.105983] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.859 [2024-07-15 11:40:02.105987] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.859 [2024-07-15 11:40:02.150268] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.859 [2024-07-15 11:40:02.150280] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.859 [2024-07-15 11:40:02.150285] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.859 [2024-07-15 11:40:02.150290] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e440) on tqpair=0x8aaec0 00:24:27.859 ===================================================== 00:24:27.859 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:27.859 ===================================================== 00:24:27.859 Controller Capabilities/Features 00:24:27.859 ================================ 00:24:27.859 Vendor ID: 0000 00:24:27.859 Subsystem Vendor ID: 0000 00:24:27.859 Serial Number: .................... 00:24:27.859 Model Number: ........................................ 
00:24:27.859 Firmware Version: 24.09
00:24:27.859 Recommended Arb Burst: 0
00:24:27.859 IEEE OUI Identifier: 00 00 00
00:24:27.859 Multi-path I/O
00:24:27.859 May have multiple subsystem ports: No
00:24:27.859 May have multiple controllers: No
00:24:27.859 Associated with SR-IOV VF: No
00:24:27.859 Max Data Transfer Size: 131072
00:24:27.859 Max Number of Namespaces: 0
00:24:27.859 Max Number of I/O Queues: 1024
00:24:27.859 NVMe Specification Version (VS): 1.3
00:24:27.859 NVMe Specification Version (Identify): 1.3
00:24:27.859 Maximum Queue Entries: 128
00:24:27.859 Contiguous Queues Required: Yes
00:24:27.859 Arbitration Mechanisms Supported
00:24:27.859 Weighted Round Robin: Not Supported
00:24:27.859 Vendor Specific: Not Supported
00:24:27.859 Reset Timeout: 15000 ms
00:24:27.859 Doorbell Stride: 4 bytes
00:24:27.859 NVM Subsystem Reset: Not Supported
00:24:27.859 Command Sets Supported
00:24:27.859 NVM Command Set: Supported
00:24:27.859 Boot Partition: Not Supported
00:24:27.859 Memory Page Size Minimum: 4096 bytes
00:24:27.859 Memory Page Size Maximum: 4096 bytes
00:24:27.859 Persistent Memory Region: Not Supported
00:24:27.859 Optional Asynchronous Events Supported
00:24:27.859 Namespace Attribute Notices: Not Supported
00:24:27.859 Firmware Activation Notices: Not Supported
00:24:27.859 ANA Change Notices: Not Supported
00:24:27.859 PLE Aggregate Log Change Notices: Not Supported
00:24:27.859 LBA Status Info Alert Notices: Not Supported
00:24:27.859 EGE Aggregate Log Change Notices: Not Supported
00:24:27.859 Normal NVM Subsystem Shutdown event: Not Supported
00:24:27.859 Zone Descriptor Change Notices: Not Supported
00:24:27.859 Discovery Log Change Notices: Supported
00:24:27.859 Controller Attributes
00:24:27.859 128-bit Host Identifier: Not Supported
00:24:27.859 Non-Operational Permissive Mode: Not Supported
00:24:27.859 NVM Sets: Not Supported
00:24:27.859 Read Recovery Levels: Not Supported
00:24:27.859 Endurance Groups: Not Supported
00:24:27.859 Predictable Latency Mode: Not Supported
00:24:27.859 Traffic Based Keep ALive: Not Supported
00:24:27.859 Namespace Granularity: Not Supported
00:24:27.859 SQ Associations: Not Supported
00:24:27.859 UUID List: Not Supported
00:24:27.859 Multi-Domain Subsystem: Not Supported
00:24:27.859 Fixed Capacity Management: Not Supported
00:24:27.859 Variable Capacity Management: Not Supported
00:24:27.859 Delete Endurance Group: Not Supported
00:24:27.859 Delete NVM Set: Not Supported
00:24:27.859 Extended LBA Formats Supported: Not Supported
00:24:27.860 Flexible Data Placement Supported: Not Supported
00:24:27.860
00:24:27.860 Controller Memory Buffer Support
00:24:27.860 ================================
00:24:27.860 Supported: No
00:24:27.860
00:24:27.860 Persistent Memory Region Support
00:24:27.860 ================================
00:24:27.860 Supported: No
00:24:27.860
00:24:27.860 Admin Command Set Attributes
00:24:27.860 ============================
00:24:27.860 Security Send/Receive: Not Supported
00:24:27.860 Format NVM: Not Supported
00:24:27.860 Firmware Activate/Download: Not Supported
00:24:27.860 Namespace Management: Not Supported
00:24:27.860 Device Self-Test: Not Supported
00:24:27.860 Directives: Not Supported
00:24:27.860 NVMe-MI: Not Supported
00:24:27.860 Virtualization Management: Not Supported
00:24:27.860 Doorbell Buffer Config: Not Supported
00:24:27.860 Get LBA Status Capability: Not Supported
00:24:27.860 Command & Feature Lockdown Capability: Not Supported
00:24:27.860 Abort Command Limit: 1
00:24:27.860 Async Event Request Limit: 4
00:24:27.860 Number of Firmware Slots: N/A
00:24:27.860 Firmware Slot 1 Read-Only: N/A
00:24:27.860 Firmware Activation Without Reset: N/A
00:24:27.860 Multiple Update Detection Support: N/A
00:24:27.860 Firmware Update Granularity: No Information Provided
00:24:27.860 Per-Namespace SMART Log: No
00:24:27.860 Asymmetric Namespace Access Log Page: Not Supported
00:24:27.860 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:24:27.860 Command Effects Log Page: Not Supported
00:24:27.860 Get Log Page Extended Data: Supported
00:24:27.860 Telemetry Log Pages: Not Supported
00:24:27.860 Persistent Event Log Pages: Not Supported
00:24:27.860 Supported Log Pages Log Page: May Support
00:24:27.860 Commands Supported & Effects Log Page: Not Supported
00:24:27.860 Feature Identifiers & Effects Log Page:May Support
00:24:27.860 NVMe-MI Commands & Effects Log Page: May Support
00:24:27.860 Data Area 4 for Telemetry Log: Not Supported
00:24:27.860 Error Log Page Entries Supported: 128
00:24:27.860 Keep Alive: Not Supported
00:24:27.860
00:24:27.860 NVM Command Set Attributes
00:24:27.860 ==========================
00:24:27.860 Submission Queue Entry Size
00:24:27.860 Max: 1
00:24:27.860 Min: 1
00:24:27.860 Completion Queue Entry Size
00:24:27.860 Max: 1
00:24:27.860 Min: 1
00:24:27.860 Number of Namespaces: 0
00:24:27.860 Compare Command: Not Supported
00:24:27.860 Write Uncorrectable Command: Not Supported
00:24:27.860 Dataset Management Command: Not Supported
00:24:27.860 Write Zeroes Command: Not Supported
00:24:27.860 Set Features Save Field: Not Supported
00:24:27.860 Reservations: Not Supported
00:24:27.860 Timestamp: Not Supported
00:24:27.860 Copy: Not Supported
00:24:27.860 Volatile Write Cache: Not Present
00:24:27.860 Atomic Write Unit (Normal): 1
00:24:27.860 Atomic Write Unit (PFail): 1
00:24:27.860 Atomic Compare & Write Unit: 1
00:24:27.860 Fused Compare & Write: Supported
00:24:27.860 Scatter-Gather List
00:24:27.860 SGL Command Set: Supported
00:24:27.860 SGL Keyed: Supported
00:24:27.860 SGL Bit Bucket Descriptor: Not Supported
00:24:27.860 SGL Metadata Pointer: Not Supported
00:24:27.860 Oversized SGL: Not Supported
00:24:27.860 SGL Metadata Address: Not Supported
00:24:27.860 SGL Offset: Supported
00:24:27.860 Transport SGL Data Block: Not Supported
00:24:27.860 Replay Protected Memory Block: Not Supported
00:24:27.860
00:24:27.860 Firmware Slot Information
00:24:27.860 =========================
00:24:27.860 Active slot: 0
00:24:27.860
00:24:27.860
00:24:27.860 Error Log
00:24:27.860 =========
00:24:27.860
00:24:27.860 Active Namespaces
00:24:27.860 =================
00:24:27.860 Discovery Log Page
00:24:27.860 ==================
00:24:27.860 Generation Counter: 2
00:24:27.860 Number of Records: 2
00:24:27.860 Record Format: 0
00:24:27.860
00:24:27.860 Discovery Log Entry 0
00:24:27.860 ----------------------
00:24:27.860 Transport Type: 3 (TCP)
00:24:27.860 Address Family: 1 (IPv4)
00:24:27.860 Subsystem Type: 3 (Current Discovery Subsystem)
00:24:27.860 Entry Flags:
00:24:27.860 Duplicate Returned Information: 1
00:24:27.860 Explicit Persistent Connection Support for Discovery: 1
00:24:27.860 Transport Requirements:
00:24:27.860 Secure Channel: Not Required
00:24:27.860 Port ID: 0 (0x0000)
00:24:27.860 Controller ID: 65535 (0xffff)
00:24:27.860 Admin Max SQ Size: 128
00:24:27.860 Transport Service Identifier: 4420
00:24:27.860 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:24:27.860 Transport Address: 10.0.0.2 00:24:27.860
Discovery Log Entry 1 00:24:27.860 ---------------------- 00:24:27.860 Transport Type: 3 (TCP) 00:24:27.860 Address Family: 1 (IPv4) 00:24:27.860 Subsystem Type: 2 (NVM Subsystem) 00:24:27.860 Entry Flags: 00:24:27.860 Duplicate Returned Information: 0 00:24:27.860 Explicit Persistent Connection Support for Discovery: 0 00:24:27.860 Transport Requirements: 00:24:27.860 Secure Channel: Not Required 00:24:27.860 Port ID: 0 (0x0000) 00:24:27.860 Controller ID: 65535 (0xffff) 00:24:27.860 Admin Max SQ Size: 128 00:24:27.860 Transport Service Identifier: 4420 00:24:27.860 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:27.860 Transport Address: 10.0.0.2 [2024-07-15 11:40:02.150393] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:27.860 [2024-07-15 11:40:02.150407] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92de40) on tqpair=0x8aaec0 00:24:27.860 [2024-07-15 11:40:02.150415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.860 [2024-07-15 11:40:02.150421] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92dfc0) on tqpair=0x8aaec0 00:24:27.860 [2024-07-15 11:40:02.150427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.860 [2024-07-15 11:40:02.150433] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e140) on tqpair=0x8aaec0 00:24:27.860 [2024-07-15 11:40:02.150439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.860 [2024-07-15 11:40:02.150446] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e2c0) on tqpair=0x8aaec0 00:24:27.860 [2024-07-15 11:40:02.150452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.860 [2024-07-15 11:40:02.150465] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.860 [2024-07-15 11:40:02.150470] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.860 [2024-07-15 11:40:02.150475] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8aaec0) 00:24:27.860 [2024-07-15 11:40:02.150484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.860 [2024-07-15 11:40:02.150502] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e2c0, cid 3, qid 0 00:24:27.860 [2024-07-15 11:40:02.150611] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.860 [2024-07-15 11:40:02.150620] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.860 [2024-07-15 11:40:02.150624] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.860 [2024-07-15 11:40:02.150629] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e2c0) on tqpair=0x8aaec0 00:24:27.860 [2024-07-15 11:40:02.150637] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.860 [2024-07-15 11:40:02.150643] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.860 [2024-07-15 11:40:02.150647] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8aaec0) 00:24:27.860 [2024-07-15 11:40:02.150656] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.860 [2024-07-15 11:40:02.150674] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e2c0, cid 3, qid 0 00:24:27.860 [2024-07-15 11:40:02.150817] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.860 [2024-07-15 11:40:02.150825] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.860 [2024-07-15 11:40:02.150830] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.860 [2024-07-15 11:40:02.150835] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e2c0) on tqpair=0x8aaec0 00:24:27.860 [2024-07-15 11:40:02.150840] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:27.860 [2024-07-15 11:40:02.150849] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:27.860 [2024-07-15 11:40:02.150861] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.860 [2024-07-15 11:40:02.150866] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.860 [2024-07-15 11:40:02.150871] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8aaec0) 00:24:27.860 [2024-07-15 11:40:02.150879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.860 [2024-07-15 11:40:02.150892] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e2c0, cid 3, qid 0 00:24:27.860 [2024-07-15 11:40:02.150999] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.860 [2024-07-15 11:40:02.151008] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.860 [2024-07-15 11:40:02.151012] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.860 [2024-07-15 11:40:02.151017] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e2c0) on tqpair=0x8aaec0 00:24:27.860 [2024-07-15 11:40:02.151029] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.860 [2024-07-15 11:40:02.151035] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.860 [2024-07-15 11:40:02.151039] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8aaec0) 00:24:27.860 [2024-07-15 11:40:02.151047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.860 [2024-07-15 11:40:02.151061] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e2c0, cid 3, qid 0 00:24:27.860 [2024-07-15 11:40:02.151168] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.860 [2024-07-15 11:40:02.151177] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.861 [2024-07-15 11:40:02.151181] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.151186] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e2c0) on tqpair=0x8aaec0 00:24:27.861 [2024-07-15 11:40:02.151197] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.151203] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.151207] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8aaec0) 00:24:27.861 [2024-07-15 11:40:02.151216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.861 [2024-07-15 11:40:02.151229] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e2c0, cid 3, qid 0 00:24:27.861 [2024-07-15 11:40:02.151363] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.861 [2024-07-15 11:40:02.151371] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.861 [2024-07-15 11:40:02.151376] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.151380] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e2c0) on tqpair=0x8aaec0 00:24:27.861 [2024-07-15 11:40:02.151393] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.151398] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.151403] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8aaec0) 00:24:27.861 [2024-07-15 11:40:02.151411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.861 [2024-07-15 11:40:02.151426] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e2c0, cid 3, qid 0 00:24:27.861 [2024-07-15 11:40:02.151526] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.861 [2024-07-15 11:40:02.151534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.861 [2024-07-15 11:40:02.151539] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.151544] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e2c0) on tqpair=0x8aaec0 00:24:27.861 [2024-07-15 11:40:02.151558] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.151563] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.151568] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8aaec0) 00:24:27.861 [2024-07-15 11:40:02.151576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.861 [2024-07-15 11:40:02.151590] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e2c0, cid 3, qid 0 00:24:27.861 [2024-07-15 11:40:02.151694] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.861 [2024-07-15 11:40:02.151702] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.861 [2024-07-15 11:40:02.151706] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.151711] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e2c0) on tqpair=0x8aaec0 00:24:27.861 [2024-07-15 11:40:02.151723] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.151728] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.151733] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8aaec0) 00:24:27.861 [2024-07-15 11:40:02.151741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.861 [2024-07-15 11:40:02.151755] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e2c0, cid 3, qid 0 00:24:27.861 [2024-07-15 11:40:02.151864] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.861 [2024-07-15 11:40:02.151872] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.861 [2024-07-15 11:40:02.151877] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.151882] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e2c0) on tqpair=0x8aaec0 00:24:27.861 [2024-07-15 11:40:02.151893] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.151898] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.151903] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8aaec0) 00:24:27.861 [2024-07-15 11:40:02.151911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.861 [2024-07-15 11:40:02.151924] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e2c0, cid 3, qid 0 00:24:27.861 [2024-07-15 11:40:02.152043] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.861 [2024-07-15 11:40:02.152051] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.861 [2024-07-15 11:40:02.152056] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.152060] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e2c0) on tqpair=0x8aaec0 00:24:27.861 [2024-07-15 11:40:02.152073] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.152079] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.152083] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8aaec0) 00:24:27.861 [2024-07-15 11:40:02.152091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.861 [2024-07-15 11:40:02.152105] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e2c0, cid 3, qid 0 00:24:27.861 [2024-07-15 11:40:02.152208] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.861 [2024-07-15 11:40:02.152216] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.861 [2024-07-15 11:40:02.152221] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.152225] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e2c0) on tqpair=0x8aaec0 00:24:27.861 [2024-07-15 11:40:02.152237] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.152244] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.152249] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8aaec0) 00:24:27.861 [2024-07-15 11:40:02.152262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.861 [2024-07-15 11:40:02.152276] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e2c0, cid 3, qid 0 00:24:27.861 [2024-07-15 11:40:02.152379] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.861 [2024-07-15 11:40:02.152388] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.861 [2024-07-15 11:40:02.152392] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.152396] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e2c0) on tqpair=0x8aaec0 00:24:27.861 [2024-07-15 11:40:02.152408] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.152414] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.152418] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8aaec0) 00:24:27.861 [2024-07-15 11:40:02.152426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.861 [2024-07-15 11:40:02.152439] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e2c0, cid 3, qid 0 00:24:27.861 [2024-07-15 11:40:02.152546] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.861 [2024-07-15 11:40:02.152554] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.861 [2024-07-15 11:40:02.152559] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.152563] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e2c0) on tqpair=0x8aaec0 00:24:27.861 [2024-07-15 11:40:02.152575] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.152580] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.152585] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8aaec0) 00:24:27.861 [2024-07-15 11:40:02.152593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.861 [2024-07-15 11:40:02.152606] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e2c0, cid 3, qid 0 00:24:27.861 [2024-07-15 11:40:02.152714] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.861 [2024-07-15 11:40:02.152722] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.861 [2024-07-15 11:40:02.152726] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.152732] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e2c0) on tqpair=0x8aaec0 00:24:27.861 [2024-07-15 11:40:02.152743] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.152748] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.152753] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8aaec0) 00:24:27.861 [2024-07-15 11:40:02.152761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.861 [2024-07-15 11:40:02.152774] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e2c0, cid 3, qid 0 00:24:27.861 [2024-07-15 11:40:02.152878] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.861 [2024-07-15 11:40:02.152886] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.861 [2024-07-15 11:40:02.152890] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.152895] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e2c0) on tqpair=0x8aaec0 00:24:27.861 [2024-07-15 11:40:02.152906] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.152912] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.152921] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8aaec0) 00:24:27.861 [2024-07-15 11:40:02.152930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.861 [2024-07-15 11:40:02.152943] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e2c0, cid 3, qid 0 00:24:27.861 [2024-07-15 11:40:02.153050] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.861 [2024-07-15 11:40:02.153059] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.861 [2024-07-15 11:40:02.153063] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.153068] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e2c0) on tqpair=0x8aaec0 00:24:27.861 [2024-07-15 11:40:02.153080] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.153085] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.153089] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8aaec0) 00:24:27.861 [2024-07-15 11:40:02.153098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.861 [2024-07-15 11:40:02.153111] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e2c0, cid 3, qid 0 00:24:27.861 [2024-07-15 11:40:02.153218] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.861 [2024-07-15 11:40:02.153226] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.861 [2024-07-15 11:40:02.153231] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.153235] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e2c0) on tqpair=0x8aaec0 00:24:27.861 [2024-07-15 11:40:02.153247] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.153252] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.861 [2024-07-15 11:40:02.153262] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8aaec0) 00:24:27.861 [2024-07-15 11:40:02.153271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.861 [2024-07-15 11:40:02.153284] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e2c0, cid 3, qid 0 00:24:27.862 [2024-07-15 11:40:02.153390] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.862 [2024-07-15 11:40:02.153398] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.862 [2024-07-15 11:40:02.153403] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.153407] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e2c0) on tqpair=0x8aaec0 00:24:27.862 
[2024-07-15 11:40:02.153419] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.153425] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.153429] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8aaec0) 00:24:27.862 [2024-07-15 11:40:02.153438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.862 [2024-07-15 11:40:02.153451] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e2c0, cid 3, qid 0 00:24:27.862 [2024-07-15 11:40:02.153561] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.862 [2024-07-15 11:40:02.153570] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.862 [2024-07-15 11:40:02.153574] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.153579] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e2c0) on tqpair=0x8aaec0 00:24:27.862 [2024-07-15 11:40:02.153591] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.153596] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.153600] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8aaec0) 00:24:27.862 [2024-07-15 11:40:02.153611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.862 [2024-07-15 11:40:02.153624] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e2c0, cid 3, qid 0 00:24:27.862 [2024-07-15 11:40:02.153730] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.862 [2024-07-15 11:40:02.153738] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.862 [2024-07-15 11:40:02.153742] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.153747] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e2c0) on tqpair=0x8aaec0 00:24:27.862 [2024-07-15 11:40:02.153759] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.153764] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.153768] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8aaec0) 00:24:27.862 [2024-07-15 11:40:02.153776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.862 [2024-07-15 11:40:02.153789] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e2c0, cid 3, qid 0 00:24:27.862 [2024-07-15 11:40:02.153898] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.862 [2024-07-15 11:40:02.153906] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.862 [2024-07-15 11:40:02.153911] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.153916] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e2c0) on tqpair=0x8aaec0 00:24:27.862 [2024-07-15 11:40:02.153927] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.153932] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.862 [2024-07-15 
11:40:02.153937] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8aaec0) 00:24:27.862 [2024-07-15 11:40:02.153945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.862 [2024-07-15 11:40:02.153958] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e2c0, cid 3, qid 0 00:24:27.862 [2024-07-15 11:40:02.154077] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.862 [2024-07-15 11:40:02.154084] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.862 [2024-07-15 11:40:02.154089] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.154093] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e2c0) on tqpair=0x8aaec0 00:24:27.862 [2024-07-15 11:40:02.154106] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.154111] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.154116] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8aaec0) 00:24:27.862 [2024-07-15 11:40:02.154124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.862 [2024-07-15 11:40:02.154137] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e2c0, cid 3, qid 0 00:24:27.862 [2024-07-15 11:40:02.154238] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.862 [2024-07-15 11:40:02.154246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.862 [2024-07-15 11:40:02.154250] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.158265] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e2c0) on tqpair=0x8aaec0 00:24:27.862 [2024-07-15 11:40:02.158281] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.158287] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.158291] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8aaec0) 00:24:27.862 [2024-07-15 11:40:02.158300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.862 [2024-07-15 11:40:02.158318] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e2c0, cid 3, qid 0 00:24:27.862 [2024-07-15 11:40:02.158496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.862 [2024-07-15 11:40:02.158504] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.862 [2024-07-15 11:40:02.158508] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.158513] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x92e2c0) on tqpair=0x8aaec0 00:24:27.862 [2024-07-15 11:40:02.158523] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:24:27.862 00:24:27.862 11:40:02 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:27.862 
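For context on the spdk_nvme_identify invocation above (and the discovery pass before it): the same connect-and-identify flow that the debug records trace (FABRIC CONNECT, read VS/CAP, set CC.EN, IDENTIFY, keep-alive setup) can be driven through SPDK's public host API. The following is a minimal sketch, not a copy of the spdk_nvme_identify tool; it assumes an SPDK tree of roughly this version is available to compile and link against, and the program name "identify_sketch" is purely illustrative. It reuses the transport ID string from the command line above.

#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr_opts ctrlr_opts;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Initialize the SPDK environment (memory, hugepages, etc.). */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) < 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}

	/* Same transport ID string the test passes to spdk_nvme_identify. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		fprintf(stderr, "failed to parse transport ID\n");
		return 1;
	}

	/* Connect to the subsystem over TCP; this runs the admin-queue
	 * initialization state machine that the debug log traces record
	 * by record (connect adminq, read vs/cap, enable CC.EN, identify,
	 * keep alive). */
	spdk_nvme_ctrlr_get_default_ctrlr_opts(&ctrlr_opts, sizeof(ctrlr_opts));
	ctrlr = spdk_nvme_connect(&trid, &ctrlr_opts, sizeof(ctrlr_opts));
	if (ctrlr == NULL) {
		fprintf(stderr, "spdk_nvme_connect failed\n");
		return 1;
	}

	/* The Identify Controller data is cached by the driver once
	 * initialization completes; print a few of the fields that appear
	 * in the report above. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial Number:    %.20s\n", (const char *)cdata->sn);
	printf("Model Number:     %.40s\n", (const char *)cdata->mn);
	printf("Firmware Version: %.8s\n", (const char *)cdata->fr);

	spdk_nvme_detach(ctrlr);
	return 0;
}

The log output that follows is the real tool doing the equivalent work with "-L all", so every admin command (IDENTIFY, GET LOG PAGE, SET FEATURES) is echoed at debug level.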
[2024-07-15 11:40:02.202685] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:24:27.862 [2024-07-15 11:40:02.202718] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885799 ] 00:24:27.862 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.862 [2024-07-15 11:40:02.239500] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:27.862 [2024-07-15 11:40:02.239553] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:27.862 [2024-07-15 11:40:02.239560] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:27.862 [2024-07-15 11:40:02.239573] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:27.862 [2024-07-15 11:40:02.239580] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:27.862 [2024-07-15 11:40:02.239819] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:27.862 [2024-07-15 11:40:02.239848] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x17abec0 0 00:24:27.862 [2024-07-15 11:40:02.246267] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:27.862 [2024-07-15 11:40:02.246282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:27.862 [2024-07-15 11:40:02.246287] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:27.862 [2024-07-15 11:40:02.246291] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:27.862 [2024-07-15 11:40:02.246329] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.246336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.246341] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17abec0) 00:24:27.862 [2024-07-15 11:40:02.246355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:27.862 [2024-07-15 11:40:02.246374] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182ee40, cid 0, qid 0 00:24:27.862 [2024-07-15 11:40:02.254270] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.862 [2024-07-15 11:40:02.254281] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.862 [2024-07-15 11:40:02.254286] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.254291] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182ee40) on tqpair=0x17abec0 00:24:27.862 [2024-07-15 11:40:02.254306] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:27.862 [2024-07-15 11:40:02.254314] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:27.862 [2024-07-15 11:40:02.254324] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:27.862 [2024-07-15 11:40:02.254338] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:24:27.862 [2024-07-15 11:40:02.254343] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.254348] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17abec0) 00:24:27.862 [2024-07-15 11:40:02.254358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.862 [2024-07-15 11:40:02.254375] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182ee40, cid 0, qid 0 00:24:27.862 [2024-07-15 11:40:02.254600] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.862 [2024-07-15 11:40:02.254609] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.862 [2024-07-15 11:40:02.254614] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.254618] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182ee40) on tqpair=0x17abec0 00:24:27.862 [2024-07-15 11:40:02.254625] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:27.862 [2024-07-15 11:40:02.254634] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:27.862 [2024-07-15 11:40:02.254642] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.254647] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.254652] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17abec0) 00:24:27.862 [2024-07-15 11:40:02.254660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.862 [2024-07-15 11:40:02.254674] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182ee40, cid 0, qid 0 00:24:27.862 [2024-07-15 11:40:02.254817] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.862 [2024-07-15 11:40:02.254825] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.862 [2024-07-15 11:40:02.254830] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.254835] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182ee40) on tqpair=0x17abec0 00:24:27.862 [2024-07-15 11:40:02.254841] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:27.862 [2024-07-15 11:40:02.254851] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:27.862 [2024-07-15 11:40:02.254859] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.862 [2024-07-15 11:40:02.254864] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.863 [2024-07-15 11:40:02.254869] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17abec0) 00:24:27.863 [2024-07-15 11:40:02.254878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.863 [2024-07-15 11:40:02.254891] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182ee40, cid 0, qid 0 00:24:27.863 [2024-07-15 11:40:02.255034] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:24:27.863 [2024-07-15 11:40:02.255042] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.863 [2024-07-15 11:40:02.255046] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.863 [2024-07-15 11:40:02.255052] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182ee40) on tqpair=0x17abec0 00:24:27.863 [2024-07-15 11:40:02.255058] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:27.863 [2024-07-15 11:40:02.255070] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.863 [2024-07-15 11:40:02.255078] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.863 [2024-07-15 11:40:02.255083] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17abec0) 00:24:27.863 [2024-07-15 11:40:02.255091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.863 [2024-07-15 11:40:02.255105] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182ee40, cid 0, qid 0 00:24:27.863 [2024-07-15 11:40:02.255245] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.863 [2024-07-15 11:40:02.255259] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.863 [2024-07-15 11:40:02.255264] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.863 [2024-07-15 11:40:02.255269] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182ee40) on tqpair=0x17abec0 00:24:27.863 [2024-07-15 11:40:02.255275] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:27.863 [2024-07-15 11:40:02.255280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:27.863 [2024-07-15 11:40:02.255291] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:27.863 [2024-07-15 11:40:02.255398] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:27.863 [2024-07-15 11:40:02.255403] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:27.863 [2024-07-15 11:40:02.255412] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.863 [2024-07-15 11:40:02.255417] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.863 [2024-07-15 11:40:02.255422] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17abec0) 00:24:27.863 [2024-07-15 11:40:02.255430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.863 [2024-07-15 11:40:02.255444] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182ee40, cid 0, qid 0 00:24:27.863 [2024-07-15 11:40:02.255607] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.863 [2024-07-15 11:40:02.255615] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.863 [2024-07-15 11:40:02.255619] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.863 [2024-07-15 
11:40:02.255624] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182ee40) on tqpair=0x17abec0 00:24:27.863 [2024-07-15 11:40:02.255629] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:27.863 [2024-07-15 11:40:02.255642] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.863 [2024-07-15 11:40:02.255647] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.863 [2024-07-15 11:40:02.255652] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17abec0) 00:24:27.863 [2024-07-15 11:40:02.255660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.863 [2024-07-15 11:40:02.255673] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182ee40, cid 0, qid 0 00:24:27.863 [2024-07-15 11:40:02.255813] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.863 [2024-07-15 11:40:02.255821] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.863 [2024-07-15 11:40:02.255825] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.863 [2024-07-15 11:40:02.255830] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182ee40) on tqpair=0x17abec0 00:24:27.863 [2024-07-15 11:40:02.255836] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:27.863 [2024-07-15 11:40:02.255844] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:27.863 [2024-07-15 11:40:02.255854] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:27.863 [2024-07-15 11:40:02.255864] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:27.863 [2024-07-15 11:40:02.255875] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.863 [2024-07-15 11:40:02.255880] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17abec0) 00:24:27.863 [2024-07-15 11:40:02.255889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.863 [2024-07-15 11:40:02.255903] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182ee40, cid 0, qid 0 00:24:27.863 [2024-07-15 11:40:02.256138] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.863 [2024-07-15 11:40:02.256146] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.863 [2024-07-15 11:40:02.256151] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.863 [2024-07-15 11:40:02.256156] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17abec0): datao=0, datal=4096, cccid=0 00:24:27.863 [2024-07-15 11:40:02.256161] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x182ee40) on tqpair(0x17abec0): expected_datao=0, payload_size=4096 00:24:27.863 [2024-07-15 11:40:02.256166] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.863 [2024-07-15 11:40:02.256176] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.863 [2024-07-15 11:40:02.256180] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.863 [2024-07-15 11:40:02.256203] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.863 [2024-07-15 11:40:02.256210] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.863 [2024-07-15 11:40:02.256215] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.863 [2024-07-15 11:40:02.256220] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182ee40) on tqpair=0x17abec0 00:24:27.863 [2024-07-15 11:40:02.256228] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:27.863 [2024-07-15 11:40:02.256237] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:27.863 [2024-07-15 11:40:02.256243] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:27.863 [2024-07-15 11:40:02.256248] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:27.863 [2024-07-15 11:40:02.256259] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:27.863 [2024-07-15 11:40:02.256266] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:27.863 [2024-07-15 11:40:02.256277] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:27.863 [2024-07-15 11:40:02.256285] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.863 [2024-07-15 11:40:02.256291] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.863 [2024-07-15 11:40:02.256295] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17abec0) 00:24:27.863 [2024-07-15 11:40:02.256304] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:27.863 [2024-07-15 11:40:02.256319] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182ee40, cid 0, qid 0 00:24:27.863 [2024-07-15 11:40:02.256478] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.863 [2024-07-15 11:40:02.256486] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.863 [2024-07-15 11:40:02.256493] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.863 [2024-07-15 11:40:02.256498] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182ee40) on tqpair=0x17abec0 00:24:27.863 [2024-07-15 11:40:02.256507] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.863 [2024-07-15 11:40:02.256511] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.863 [2024-07-15 11:40:02.256516] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17abec0) 00:24:27.863 [2024-07-15 11:40:02.256524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.863 [2024-07-15 11:40:02.256532] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.863 [2024-07-15 11:40:02.256536] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.863 [2024-07-15 11:40:02.256541] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x17abec0) 00:24:27.863 [2024-07-15 11:40:02.256548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.863 [2024-07-15 11:40:02.256556] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.256560] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.256565] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x17abec0) 00:24:27.864 [2024-07-15 11:40:02.256573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.864 [2024-07-15 11:40:02.256580] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.256585] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.256589] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17abec0) 00:24:27.864 [2024-07-15 11:40:02.256596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.864 [2024-07-15 11:40:02.256602] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:27.864 [2024-07-15 11:40:02.256616] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:27.864 [2024-07-15 11:40:02.256624] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.256629] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17abec0) 00:24:27.864 [2024-07-15 11:40:02.256637] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.864 [2024-07-15 11:40:02.256652] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182ee40, cid 0, qid 0 00:24:27.864 [2024-07-15 11:40:02.256659] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182efc0, cid 1, qid 0 00:24:27.864 [2024-07-15 11:40:02.256665] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182f140, cid 2, qid 0 00:24:27.864 [2024-07-15 11:40:02.256671] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182f2c0, cid 3, qid 0 00:24:27.864 [2024-07-15 11:40:02.256677] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182f440, cid 4, qid 0 00:24:27.864 [2024-07-15 11:40:02.256969] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.864 [2024-07-15 11:40:02.256978] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.864 [2024-07-15 11:40:02.256982] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.256987] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182f440) on tqpair=0x17abec0 00:24:27.864 [2024-07-15 11:40:02.256993] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:27.864 [2024-07-15 
11:40:02.256999] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:27.864 [2024-07-15 11:40:02.257012] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:27.864 [2024-07-15 11:40:02.257019] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:27.864 [2024-07-15 11:40:02.257027] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.257033] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.257037] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17abec0) 00:24:27.864 [2024-07-15 11:40:02.257045] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:27.864 [2024-07-15 11:40:02.257059] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182f440, cid 4, qid 0 00:24:27.864 [2024-07-15 11:40:02.257202] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.864 [2024-07-15 11:40:02.257211] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.864 [2024-07-15 11:40:02.257215] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.257219] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182f440) on tqpair=0x17abec0 00:24:27.864 [2024-07-15 11:40:02.257300] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:27.864 [2024-07-15 11:40:02.257313] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:27.864 [2024-07-15 11:40:02.257323] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.257328] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17abec0) 00:24:27.864 [2024-07-15 11:40:02.257336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.864 [2024-07-15 11:40:02.257350] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182f440, cid 4, qid 0 00:24:27.864 [2024-07-15 11:40:02.257515] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.864 [2024-07-15 11:40:02.257524] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.864 [2024-07-15 11:40:02.257528] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.257533] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17abec0): datao=0, datal=4096, cccid=4 00:24:27.864 [2024-07-15 11:40:02.257538] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x182f440) on tqpair(0x17abec0): expected_datao=0, payload_size=4096 00:24:27.864 [2024-07-15 11:40:02.257544] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.257552] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.257557] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.257578] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.864 [2024-07-15 11:40:02.257586] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.864 [2024-07-15 11:40:02.257590] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.257595] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182f440) on tqpair=0x17abec0 00:24:27.864 [2024-07-15 11:40:02.257606] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:27.864 [2024-07-15 11:40:02.257622] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:27.864 [2024-07-15 11:40:02.257634] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:27.864 [2024-07-15 11:40:02.257642] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.257650] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17abec0) 00:24:27.864 [2024-07-15 11:40:02.257658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.864 [2024-07-15 11:40:02.257673] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182f440, cid 4, qid 0 00:24:27.864 [2024-07-15 11:40:02.257850] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.864 [2024-07-15 11:40:02.257858] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.864 [2024-07-15 11:40:02.257863] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.257867] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17abec0): datao=0, datal=4096, cccid=4 00:24:27.864 [2024-07-15 11:40:02.257873] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x182f440) on tqpair(0x17abec0): expected_datao=0, payload_size=4096 00:24:27.864 [2024-07-15 11:40:02.257879] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.257887] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.257892] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.257924] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.864 [2024-07-15 11:40:02.257933] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.864 [2024-07-15 11:40:02.257937] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.257942] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182f440) on tqpair=0x17abec0 00:24:27.864 [2024-07-15 11:40:02.257956] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:27.864 [2024-07-15 11:40:02.257968] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:27.864 [2024-07-15 11:40:02.257978] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.257983] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17abec0) 00:24:27.864 [2024-07-15 11:40:02.257991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.864 [2024-07-15 11:40:02.258005] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182f440, cid 4, qid 0 00:24:27.864 [2024-07-15 11:40:02.258184] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.864 [2024-07-15 11:40:02.258191] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.864 [2024-07-15 11:40:02.258196] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.258200] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17abec0): datao=0, datal=4096, cccid=4 00:24:27.864 [2024-07-15 11:40:02.258206] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x182f440) on tqpair(0x17abec0): expected_datao=0, payload_size=4096 00:24:27.864 [2024-07-15 11:40:02.258211] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.258220] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.258224] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.262259] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.864 [2024-07-15 11:40:02.262270] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.864 [2024-07-15 11:40:02.262274] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.262279] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182f440) on tqpair=0x17abec0 00:24:27.864 [2024-07-15 11:40:02.262289] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:27.864 [2024-07-15 11:40:02.262305] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:27.864 [2024-07-15 11:40:02.262316] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:27.864 [2024-07-15 11:40:02.262323] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:27.864 [2024-07-15 11:40:02.262330] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:27.864 [2024-07-15 11:40:02.262336] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:27.864 [2024-07-15 11:40:02.262342] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:27.864 [2024-07-15 11:40:02.262348] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:27.864 [2024-07-15 11:40:02.262354] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:27.864 [2024-07-15 11:40:02.262371] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
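The controller dump a few lines below is the identify data the host reads back over the admin queue once the state machine traced above reaches ready. A minimal sketch of reproducing that read by hand against the same listener, assuming a built SPDK tree and/or nvme-cli on the host; the binary path and flags here are illustrative and not taken from the test script, only the address, port, and subsystem NQN come from this log:

    # SPDK host-side identify against the TCP listener shown in this log
    # (-r takes an SPDK transport ID string)
    sudo ./build/examples/identify \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

    # or with the kernel initiator via nvme-cli: discover, connect, then dump
    # the controller identify structure from the newly created device node
    sudo nvme discover -t tcp -a 10.0.0.2 -s 4420
    sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    sudo nvme id-ctrl /dev/nvme0
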
00:24:27.864 [2024-07-15 11:40:02.262376] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17abec0) 00:24:27.864 [2024-07-15 11:40:02.262385] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.864 [2024-07-15 11:40:02.262394] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.262398] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.864 [2024-07-15 11:40:02.262403] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17abec0) 00:24:27.864 [2024-07-15 11:40:02.262410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.865 [2024-07-15 11:40:02.262429] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182f440, cid 4, qid 0 00:24:27.865 [2024-07-15 11:40:02.262436] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182f5c0, cid 5, qid 0 00:24:27.865 [2024-07-15 11:40:02.262699] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.865 [2024-07-15 11:40:02.262708] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.865 [2024-07-15 11:40:02.262712] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.262717] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182f440) on tqpair=0x17abec0 00:24:27.865 [2024-07-15 11:40:02.262725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.865 [2024-07-15 11:40:02.262732] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.865 [2024-07-15 11:40:02.262737] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.262742] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182f5c0) on tqpair=0x17abec0 00:24:27.865 [2024-07-15 11:40:02.262753] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.262759] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17abec0) 00:24:27.865 [2024-07-15 11:40:02.262767] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.865 [2024-07-15 11:40:02.262781] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182f5c0, cid 5, qid 0 00:24:27.865 [2024-07-15 11:40:02.262931] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.865 [2024-07-15 11:40:02.262940] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.865 [2024-07-15 11:40:02.262944] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.262949] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182f5c0) on tqpair=0x17abec0 00:24:27.865 [2024-07-15 11:40:02.262962] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.262968] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17abec0) 00:24:27.865 [2024-07-15 11:40:02.262976] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.865 [2024-07-15 11:40:02.262990] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182f5c0, cid 5, qid 0 00:24:27.865 [2024-07-15 11:40:02.263149] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.865 [2024-07-15 11:40:02.263158] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.865 [2024-07-15 11:40:02.263162] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.263167] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182f5c0) on tqpair=0x17abec0 00:24:27.865 [2024-07-15 11:40:02.263178] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.263184] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17abec0) 00:24:27.865 [2024-07-15 11:40:02.263192] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.865 [2024-07-15 11:40:02.263205] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182f5c0, cid 5, qid 0 00:24:27.865 [2024-07-15 11:40:02.263354] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.865 [2024-07-15 11:40:02.263363] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.865 [2024-07-15 11:40:02.263367] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.263372] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182f5c0) on tqpair=0x17abec0 00:24:27.865 [2024-07-15 11:40:02.263390] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.263395] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17abec0) 00:24:27.865 [2024-07-15 11:40:02.263404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.865 [2024-07-15 11:40:02.263413] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.263417] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17abec0) 00:24:27.865 [2024-07-15 11:40:02.263425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.865 [2024-07-15 11:40:02.263434] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.263439] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x17abec0) 00:24:27.865 [2024-07-15 11:40:02.263447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.865 [2024-07-15 11:40:02.263456] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.263461] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x17abec0) 00:24:27.865 [2024-07-15 11:40:02.263468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.865 [2024-07-15 11:40:02.263484] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182f5c0, cid 5, qid 0 00:24:27.865 
[2024-07-15 11:40:02.263491] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182f440, cid 4, qid 0 00:24:27.865 [2024-07-15 11:40:02.263496] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182f740, cid 6, qid 0 00:24:27.865 [2024-07-15 11:40:02.263502] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182f8c0, cid 7, qid 0 00:24:27.865 [2024-07-15 11:40:02.263906] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.865 [2024-07-15 11:40:02.263916] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.865 [2024-07-15 11:40:02.263921] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.263926] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17abec0): datao=0, datal=8192, cccid=5 00:24:27.865 [2024-07-15 11:40:02.263931] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x182f5c0) on tqpair(0x17abec0): expected_datao=0, payload_size=8192 00:24:27.865 [2024-07-15 11:40:02.263937] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.263972] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.263978] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.263985] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.865 [2024-07-15 11:40:02.263993] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.865 [2024-07-15 11:40:02.263997] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.264002] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17abec0): datao=0, datal=512, cccid=4 00:24:27.865 [2024-07-15 11:40:02.264007] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x182f440) on tqpair(0x17abec0): expected_datao=0, payload_size=512 00:24:27.865 [2024-07-15 11:40:02.264013] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.264021] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.264025] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.264032] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.865 [2024-07-15 11:40:02.264039] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.865 [2024-07-15 11:40:02.264044] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.264048] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17abec0): datao=0, datal=512, cccid=6 00:24:27.865 [2024-07-15 11:40:02.264054] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x182f740) on tqpair(0x17abec0): expected_datao=0, payload_size=512 00:24:27.865 [2024-07-15 11:40:02.264059] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.264067] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.264072] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.264079] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.865 [2024-07-15 11:40:02.264086] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.865 [2024-07-15 11:40:02.264090] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.264095] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17abec0): datao=0, datal=4096, cccid=7 00:24:27.865 [2024-07-15 11:40:02.264101] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x182f8c0) on tqpair(0x17abec0): expected_datao=0, payload_size=4096 00:24:27.865 [2024-07-15 11:40:02.264106] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.264114] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.264119] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.264129] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.865 [2024-07-15 11:40:02.264136] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.865 [2024-07-15 11:40:02.264140] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.264145] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182f5c0) on tqpair=0x17abec0 00:24:27.865 [2024-07-15 11:40:02.264160] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.865 [2024-07-15 11:40:02.264168] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.865 [2024-07-15 11:40:02.264172] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.264178] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182f440) on tqpair=0x17abec0 00:24:27.865 [2024-07-15 11:40:02.264190] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.865 [2024-07-15 11:40:02.264198] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.865 [2024-07-15 11:40:02.264203] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.264208] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182f740) on tqpair=0x17abec0 00:24:27.865 [2024-07-15 11:40:02.264216] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.865 [2024-07-15 11:40:02.264223] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.865 [2024-07-15 11:40:02.264228] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.865 [2024-07-15 11:40:02.264233] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182f8c0) on tqpair=0x17abec0 00:24:27.865 ===================================================== 00:24:27.865 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:27.865 ===================================================== 00:24:27.865 Controller Capabilities/Features 00:24:27.865 ================================ 00:24:27.865 Vendor ID: 8086 00:24:27.865 Subsystem Vendor ID: 8086 00:24:27.865 Serial Number: SPDK00000000000001 00:24:27.865 Model Number: SPDK bdev Controller 00:24:27.865 Firmware Version: 24.09 00:24:27.865 Recommended Arb Burst: 6 00:24:27.865 IEEE OUI Identifier: e4 d2 5c 00:24:27.865 Multi-path I/O 00:24:27.865 May have multiple subsystem ports: Yes 00:24:27.865 May have multiple controllers: Yes 00:24:27.865 Associated with SR-IOV VF: No 00:24:27.865 Max Data Transfer Size: 131072 00:24:27.865 Max Number of Namespaces: 32 00:24:27.865 Max Number of I/O Queues: 127 00:24:27.865 NVMe Specification Version (VS): 1.3 00:24:27.865 NVMe Specification Version (Identify): 1.3 
00:24:27.865 Maximum Queue Entries: 128 00:24:27.865 Contiguous Queues Required: Yes 00:24:27.865 Arbitration Mechanisms Supported 00:24:27.865 Weighted Round Robin: Not Supported 00:24:27.865 Vendor Specific: Not Supported 00:24:27.865 Reset Timeout: 15000 ms 00:24:27.866 Doorbell Stride: 4 bytes 00:24:27.866 NVM Subsystem Reset: Not Supported 00:24:27.866 Command Sets Supported 00:24:27.866 NVM Command Set: Supported 00:24:27.866 Boot Partition: Not Supported 00:24:27.866 Memory Page Size Minimum: 4096 bytes 00:24:27.866 Memory Page Size Maximum: 4096 bytes 00:24:27.866 Persistent Memory Region: Not Supported 00:24:27.866 Optional Asynchronous Events Supported 00:24:27.866 Namespace Attribute Notices: Supported 00:24:27.866 Firmware Activation Notices: Not Supported 00:24:27.866 ANA Change Notices: Not Supported 00:24:27.866 PLE Aggregate Log Change Notices: Not Supported 00:24:27.866 LBA Status Info Alert Notices: Not Supported 00:24:27.866 EGE Aggregate Log Change Notices: Not Supported 00:24:27.866 Normal NVM Subsystem Shutdown event: Not Supported 00:24:27.866 Zone Descriptor Change Notices: Not Supported 00:24:27.866 Discovery Log Change Notices: Not Supported 00:24:27.866 Controller Attributes 00:24:27.866 128-bit Host Identifier: Supported 00:24:27.866 Non-Operational Permissive Mode: Not Supported 00:24:27.866 NVM Sets: Not Supported 00:24:27.866 Read Recovery Levels: Not Supported 00:24:27.866 Endurance Groups: Not Supported 00:24:27.866 Predictable Latency Mode: Not Supported 00:24:27.866 Traffic Based Keep ALive: Not Supported 00:24:27.866 Namespace Granularity: Not Supported 00:24:27.866 SQ Associations: Not Supported 00:24:27.866 UUID List: Not Supported 00:24:27.866 Multi-Domain Subsystem: Not Supported 00:24:27.866 Fixed Capacity Management: Not Supported 00:24:27.866 Variable Capacity Management: Not Supported 00:24:27.866 Delete Endurance Group: Not Supported 00:24:27.866 Delete NVM Set: Not Supported 00:24:27.866 Extended LBA Formats Supported: Not Supported 00:24:27.866 Flexible Data Placement Supported: Not Supported 00:24:27.866 00:24:27.866 Controller Memory Buffer Support 00:24:27.866 ================================ 00:24:27.866 Supported: No 00:24:27.866 00:24:27.866 Persistent Memory Region Support 00:24:27.866 ================================ 00:24:27.866 Supported: No 00:24:27.866 00:24:27.866 Admin Command Set Attributes 00:24:27.866 ============================ 00:24:27.866 Security Send/Receive: Not Supported 00:24:27.866 Format NVM: Not Supported 00:24:27.866 Firmware Activate/Download: Not Supported 00:24:27.866 Namespace Management: Not Supported 00:24:27.866 Device Self-Test: Not Supported 00:24:27.866 Directives: Not Supported 00:24:27.866 NVMe-MI: Not Supported 00:24:27.866 Virtualization Management: Not Supported 00:24:27.866 Doorbell Buffer Config: Not Supported 00:24:27.866 Get LBA Status Capability: Not Supported 00:24:27.866 Command & Feature Lockdown Capability: Not Supported 00:24:27.866 Abort Command Limit: 4 00:24:27.866 Async Event Request Limit: 4 00:24:27.866 Number of Firmware Slots: N/A 00:24:27.866 Firmware Slot 1 Read-Only: N/A 00:24:27.866 Firmware Activation Without Reset: N/A 00:24:27.866 Multiple Update Detection Support: N/A 00:24:27.866 Firmware Update Granularity: No Information Provided 00:24:27.866 Per-Namespace SMART Log: No 00:24:27.866 Asymmetric Namespace Access Log Page: Not Supported 00:24:27.866 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:27.866 Command Effects Log Page: Supported 00:24:27.866 Get Log Page Extended 
Data: Supported 00:24:27.866 Telemetry Log Pages: Not Supported 00:24:27.866 Persistent Event Log Pages: Not Supported 00:24:27.866 Supported Log Pages Log Page: May Support 00:24:27.866 Commands Supported & Effects Log Page: Not Supported 00:24:27.866 Feature Identifiers & Effects Log Page:May Support 00:24:27.866 NVMe-MI Commands & Effects Log Page: May Support 00:24:27.866 Data Area 4 for Telemetry Log: Not Supported 00:24:27.866 Error Log Page Entries Supported: 128 00:24:27.866 Keep Alive: Supported 00:24:27.866 Keep Alive Granularity: 10000 ms 00:24:27.866 00:24:27.866 NVM Command Set Attributes 00:24:27.866 ========================== 00:24:27.866 Submission Queue Entry Size 00:24:27.866 Max: 64 00:24:27.866 Min: 64 00:24:27.866 Completion Queue Entry Size 00:24:27.866 Max: 16 00:24:27.866 Min: 16 00:24:27.866 Number of Namespaces: 32 00:24:27.866 Compare Command: Supported 00:24:27.866 Write Uncorrectable Command: Not Supported 00:24:27.866 Dataset Management Command: Supported 00:24:27.866 Write Zeroes Command: Supported 00:24:27.866 Set Features Save Field: Not Supported 00:24:27.866 Reservations: Supported 00:24:27.866 Timestamp: Not Supported 00:24:27.866 Copy: Supported 00:24:27.866 Volatile Write Cache: Present 00:24:27.866 Atomic Write Unit (Normal): 1 00:24:27.866 Atomic Write Unit (PFail): 1 00:24:27.866 Atomic Compare & Write Unit: 1 00:24:27.866 Fused Compare & Write: Supported 00:24:27.866 Scatter-Gather List 00:24:27.866 SGL Command Set: Supported 00:24:27.866 SGL Keyed: Supported 00:24:27.866 SGL Bit Bucket Descriptor: Not Supported 00:24:27.866 SGL Metadata Pointer: Not Supported 00:24:27.866 Oversized SGL: Not Supported 00:24:27.866 SGL Metadata Address: Not Supported 00:24:27.866 SGL Offset: Supported 00:24:27.866 Transport SGL Data Block: Not Supported 00:24:27.866 Replay Protected Memory Block: Not Supported 00:24:27.866 00:24:27.866 Firmware Slot Information 00:24:27.866 ========================= 00:24:27.866 Active slot: 1 00:24:27.866 Slot 1 Firmware Revision: 24.09 00:24:27.866 00:24:27.866 00:24:27.866 Commands Supported and Effects 00:24:27.866 ============================== 00:24:27.866 Admin Commands 00:24:27.866 -------------- 00:24:27.866 Get Log Page (02h): Supported 00:24:27.866 Identify (06h): Supported 00:24:27.866 Abort (08h): Supported 00:24:27.866 Set Features (09h): Supported 00:24:27.866 Get Features (0Ah): Supported 00:24:27.866 Asynchronous Event Request (0Ch): Supported 00:24:27.866 Keep Alive (18h): Supported 00:24:27.866 I/O Commands 00:24:27.866 ------------ 00:24:27.866 Flush (00h): Supported LBA-Change 00:24:27.866 Write (01h): Supported LBA-Change 00:24:27.866 Read (02h): Supported 00:24:27.866 Compare (05h): Supported 00:24:27.866 Write Zeroes (08h): Supported LBA-Change 00:24:27.866 Dataset Management (09h): Supported LBA-Change 00:24:27.866 Copy (19h): Supported LBA-Change 00:24:27.866 00:24:27.866 Error Log 00:24:27.866 ========= 00:24:27.866 00:24:27.866 Arbitration 00:24:27.866 =========== 00:24:27.866 Arbitration Burst: 1 00:24:27.866 00:24:27.866 Power Management 00:24:27.866 ================ 00:24:27.866 Number of Power States: 1 00:24:27.866 Current Power State: Power State #0 00:24:27.866 Power State #0: 00:24:27.866 Max Power: 0.00 W 00:24:27.866 Non-Operational State: Operational 00:24:27.866 Entry Latency: Not Reported 00:24:27.866 Exit Latency: Not Reported 00:24:27.866 Relative Read Throughput: 0 00:24:27.866 Relative Read Latency: 0 00:24:27.866 Relative Write Throughput: 0 00:24:27.866 Relative Write Latency: 0 
00:24:27.866 Idle Power: Not Reported 00:24:27.866 Active Power: Not Reported 00:24:27.866 Non-Operational Permissive Mode: Not Supported 00:24:27.866 00:24:27.866 Health Information 00:24:27.866 ================== 00:24:27.866 Critical Warnings: 00:24:27.866 Available Spare Space: OK 00:24:27.866 Temperature: OK 00:24:27.866 Device Reliability: OK 00:24:27.866 Read Only: No 00:24:27.866 Volatile Memory Backup: OK 00:24:27.866 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:27.866 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:27.866 Available Spare: 0% 00:24:27.866 Available Spare Threshold: 0% 00:24:27.866 Life Percentage Used:[2024-07-15 11:40:02.264357] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.866 [2024-07-15 11:40:02.264363] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x17abec0) 00:24:27.866 [2024-07-15 11:40:02.264372] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.866 [2024-07-15 11:40:02.264388] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182f8c0, cid 7, qid 0 00:24:27.866 [2024-07-15 11:40:02.264575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.866 [2024-07-15 11:40:02.264583] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.866 [2024-07-15 11:40:02.264588] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.866 [2024-07-15 11:40:02.264593] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182f8c0) on tqpair=0x17abec0 00:24:27.866 [2024-07-15 11:40:02.264631] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:27.866 [2024-07-15 11:40:02.264643] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182ee40) on tqpair=0x17abec0 00:24:27.866 [2024-07-15 11:40:02.264651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.866 [2024-07-15 11:40:02.264657] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182efc0) on tqpair=0x17abec0 00:24:27.866 [2024-07-15 11:40:02.264663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.866 [2024-07-15 11:40:02.264670] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182f140) on tqpair=0x17abec0 00:24:27.866 [2024-07-15 11:40:02.264676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.866 [2024-07-15 11:40:02.264682] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182f2c0) on tqpair=0x17abec0 00:24:27.866 [2024-07-15 11:40:02.264688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.866 [2024-07-15 11:40:02.264697] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.866 [2024-07-15 11:40:02.264702] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.866 [2024-07-15 11:40:02.264707] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17abec0) 00:24:27.867 [2024-07-15 11:40:02.264716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:27.867 [2024-07-15 11:40:02.264731] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182f2c0, cid 3, qid 0 00:24:27.867 [2024-07-15 11:40:02.264864] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.867 [2024-07-15 11:40:02.264873] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.867 [2024-07-15 11:40:02.264877] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.867 [2024-07-15 11:40:02.264882] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182f2c0) on tqpair=0x17abec0 00:24:27.867 [2024-07-15 11:40:02.264892] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.867 [2024-07-15 11:40:02.264898] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.867 [2024-07-15 11:40:02.264902] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17abec0) 00:24:27.867 [2024-07-15 11:40:02.264911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.867 [2024-07-15 11:40:02.264928] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182f2c0, cid 3, qid 0 00:24:27.867 [2024-07-15 11:40:02.265095] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.867 [2024-07-15 11:40:02.265103] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.867 [2024-07-15 11:40:02.265108] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.867 [2024-07-15 11:40:02.265113] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182f2c0) on tqpair=0x17abec0 00:24:27.867 [2024-07-15 11:40:02.265118] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:27.867 [2024-07-15 11:40:02.265124] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:27.867 [2024-07-15 11:40:02.265136] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.867 [2024-07-15 11:40:02.265141] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.867 [2024-07-15 11:40:02.265146] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17abec0) 00:24:27.867 [2024-07-15 11:40:02.265154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.867 [2024-07-15 11:40:02.265168] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182f2c0, cid 3, qid 0 00:24:27.867 [2024-07-15 11:40:02.265333] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.867 [2024-07-15 11:40:02.265341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.867 [2024-07-15 11:40:02.265346] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.867 [2024-07-15 11:40:02.265351] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182f2c0) on tqpair=0x17abec0 00:24:27.867 [2024-07-15 11:40:02.265364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.867 [2024-07-15 11:40:02.265369] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.867 [2024-07-15 11:40:02.265373] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17abec0) 00:24:27.867 [2024-07-15 11:40:02.265381] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.867 [2024-07-15 11:40:02.265395] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182f2c0, cid 3, qid 0 00:24:27.867 [2024-07-15 11:40:02.265533] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.867 [2024-07-15 11:40:02.265541] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.867 [2024-07-15 11:40:02.265546] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.867 [2024-07-15 11:40:02.265551] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182f2c0) on tqpair=0x17abec0 00:24:27.867 [2024-07-15 11:40:02.265562] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.867 [2024-07-15 11:40:02.265567] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.867 [2024-07-15 11:40:02.265572] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17abec0) 00:24:27.867 [2024-07-15 11:40:02.265580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.867 [2024-07-15 11:40:02.265593] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182f2c0, cid 3, qid 0 00:24:27.867 [2024-07-15 11:40:02.265735] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.867 [2024-07-15 11:40:02.265743] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.867 [2024-07-15 11:40:02.265750] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.867 [2024-07-15 11:40:02.265755] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182f2c0) on tqpair=0x17abec0 00:24:27.867 [2024-07-15 11:40:02.265767] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.867 [2024-07-15 11:40:02.265772] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.867 [2024-07-15 11:40:02.265776] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17abec0) 00:24:27.867 [2024-07-15 11:40:02.265785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.867 [2024-07-15 11:40:02.265798] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182f2c0, cid 3, qid 0 00:24:27.867 [2024-07-15 11:40:02.265946] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.867 [2024-07-15 11:40:02.265954] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.867 [2024-07-15 11:40:02.265959] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.867 [2024-07-15 11:40:02.265964] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182f2c0) on tqpair=0x17abec0 00:24:27.867 [2024-07-15 11:40:02.265975] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.867 [2024-07-15 11:40:02.265980] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.867 [2024-07-15 11:40:02.265985] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17abec0) 00:24:27.867 [2024-07-15 11:40:02.265993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.867 [2024-07-15 11:40:02.266006] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x182f2c0, cid 3, qid 0 00:24:27.867 [2024-07-15 11:40:02.266150] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.867 [2024-07-15 11:40:02.266158] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.867 [2024-07-15 11:40:02.266163] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.867 [2024-07-15 11:40:02.266168] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182f2c0) on tqpair=0x17abec0 00:24:27.867 [2024-07-15 11:40:02.266180] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.867 [2024-07-15 11:40:02.266185] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.867 [2024-07-15 11:40:02.266189] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17abec0) 00:24:27.867 [2024-07-15 11:40:02.266198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.867 [2024-07-15 11:40:02.266211] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182f2c0, cid 3, qid 0 00:24:27.867 [2024-07-15 11:40:02.270266] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.867 [2024-07-15 11:40:02.270277] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.867 [2024-07-15 11:40:02.270282] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.867 [2024-07-15 11:40:02.270287] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182f2c0) on tqpair=0x17abec0 00:24:27.867 [2024-07-15 11:40:02.270299] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.867 [2024-07-15 11:40:02.270304] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.867 [2024-07-15 11:40:02.270309] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17abec0) 00:24:27.867 [2024-07-15 11:40:02.270318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.867 [2024-07-15 11:40:02.270333] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182f2c0, cid 3, qid 0 00:24:27.867 [2024-07-15 11:40:02.270481] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.867 [2024-07-15 11:40:02.270489] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.867 [2024-07-15 11:40:02.270493] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.867 [2024-07-15 11:40:02.270501] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x182f2c0) on tqpair=0x17abec0 00:24:27.867 [2024-07-15 11:40:02.270510] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:24:27.867 0% 00:24:27.867 Data Units Read: 0 00:24:27.867 Data Units Written: 0 00:24:27.867 Host Read Commands: 0 00:24:27.867 Host Write Commands: 0 00:24:27.867 Controller Busy Time: 0 minutes 00:24:27.867 Power Cycles: 0 00:24:27.867 Power On Hours: 0 hours 00:24:27.867 Unsafe Shutdowns: 0 00:24:27.867 Unrecoverable Media Errors: 0 00:24:27.867 Lifetime Error Log Entries: 0 00:24:27.867 Warning Temperature Time: 0 minutes 00:24:27.867 Critical Temperature Time: 0 minutes 00:24:27.867 00:24:27.867 Number of Queues 00:24:27.867 ================ 00:24:27.867 Number of I/O Submission Queues: 127 00:24:27.867 Number of I/O Completion Queues: 127 
00:24:27.867 00:24:27.867 Active Namespaces 00:24:27.867 ================= 00:24:27.867 Namespace ID:1 00:24:27.867 Error Recovery Timeout: Unlimited 00:24:27.867 Command Set Identifier: NVM (00h) 00:24:27.867 Deallocate: Supported 00:24:27.867 Deallocated/Unwritten Error: Not Supported 00:24:27.867 Deallocated Read Value: Unknown 00:24:27.867 Deallocate in Write Zeroes: Not Supported 00:24:27.867 Deallocated Guard Field: 0xFFFF 00:24:27.867 Flush: Supported 00:24:27.867 Reservation: Supported 00:24:27.867 Namespace Sharing Capabilities: Multiple Controllers 00:24:27.868 Size (in LBAs): 131072 (0GiB) 00:24:27.868 Capacity (in LBAs): 131072 (0GiB) 00:24:27.868 Utilization (in LBAs): 131072 (0GiB) 00:24:27.868 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:27.868 EUI64: ABCDEF0123456789 00:24:27.868 UUID: d4d82978-c3a6-4c1b-b187-181a8f4dd467 00:24:27.868 Thin Provisioning: Not Supported 00:24:27.868 Per-NS Atomic Units: Yes 00:24:27.868 Atomic Boundary Size (Normal): 0 00:24:27.868 Atomic Boundary Size (PFail): 0 00:24:27.868 Atomic Boundary Offset: 0 00:24:27.868 Maximum Single Source Range Length: 65535 00:24:27.868 Maximum Copy Length: 65535 00:24:27.868 Maximum Source Range Count: 1 00:24:27.868 NGUID/EUI64 Never Reused: No 00:24:27.868 Namespace Write Protected: No 00:24:27.868 Number of LBA Formats: 1 00:24:27.868 Current LBA Format: LBA Format #00 00:24:27.868 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:27.868 00:24:27.868 11:40:02 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:27.868 11:40:02 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:27.868 11:40:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.868 11:40:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.868 11:40:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.868 11:40:02 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:27.868 11:40:02 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:27.868 11:40:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:27.868 11:40:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:27.868 11:40:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:27.868 11:40:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:27.868 11:40:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:27.868 11:40:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:27.868 rmmod nvme_tcp 00:24:28.127 rmmod nvme_fabrics 00:24:28.127 rmmod nvme_keyring 00:24:28.127 11:40:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:28.127 11:40:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:28.127 11:40:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:28.127 11:40:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2885520 ']' 00:24:28.127 11:40:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2885520 00:24:28.127 11:40:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 2885520 ']' 00:24:28.127 11:40:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 2885520 00:24:28.127 11:40:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:24:28.127 11:40:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' 
Linux = Linux ']' 00:24:28.127 11:40:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2885520 00:24:28.127 11:40:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:28.127 11:40:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:28.127 11:40:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2885520' 00:24:28.127 killing process with pid 2885520 00:24:28.127 11:40:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 2885520 00:24:28.127 11:40:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 2885520 00:24:28.387 11:40:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:28.387 11:40:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:28.387 11:40:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:28.387 11:40:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:28.387 11:40:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:28.387 11:40:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.387 11:40:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.387 11:40:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.292 11:40:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:30.292 00:24:30.292 real 0m9.807s 00:24:30.292 user 0m8.117s 00:24:30.292 sys 0m4.772s 00:24:30.292 11:40:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:30.292 11:40:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:30.292 ************************************ 00:24:30.292 END TEST nvmf_identify 00:24:30.292 ************************************ 00:24:30.292 11:40:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:30.292 11:40:04 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:30.292 11:40:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:30.292 11:40:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:30.292 11:40:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:30.551 ************************************ 00:24:30.551 START TEST nvmf_perf 00:24:30.551 ************************************ 00:24:30.551 11:40:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:30.551 * Looking for test storage... 
00:24:30.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:30.551 11:40:04 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.551 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:30.551 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.551 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.551 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.551 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.551 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.551 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.551 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.551 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.551 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.551 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.552 11:40:04 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:30.552 11:40:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:37.119 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:37.119 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:37.119 Found net devices under 0000:af:00.0: cvl_0_0 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:37.119 Found net devices under 0000:af:00.1: cvl_0_1 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:37.119 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:37.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:37.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:24:37.120 00:24:37.120 --- 10.0.0.2 ping statistics --- 00:24:37.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.120 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:37.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:37.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:24:37.120 00:24:37.120 --- 10.0.0.1 ping statistics --- 00:24:37.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.120 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2889435 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2889435 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 2889435 ']' 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:37.120 11:40:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:37.120 [2024-07-15 11:40:10.989453] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:24:37.120 [2024-07-15 11:40:10.989515] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.120 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.120 [2024-07-15 11:40:11.078201] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:37.120 [2024-07-15 11:40:11.169279] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.120 [2024-07-15 11:40:11.169322] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:37.120 [2024-07-15 11:40:11.169332] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.120 [2024-07-15 11:40:11.169340] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.120 [2024-07-15 11:40:11.169353] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:37.120 [2024-07-15 11:40:11.169408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.120 [2024-07-15 11:40:11.169545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.120 [2024-07-15 11:40:11.169579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.120 [2024-07-15 11:40:11.169578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:37.688 11:40:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:37.688 11:40:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:24:37.688 11:40:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:37.688 11:40:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:37.688 11:40:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:37.688 11:40:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:37.688 11:40:11 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:37.688 11:40:11 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:40.970 11:40:15 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:40.970 11:40:15 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:40.970 11:40:15 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:86:00.0 00:24:40.970 11:40:15 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:41.228 11:40:15 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:41.228 11:40:15 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:86:00.0 ']' 00:24:41.228 11:40:15 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:41.228 11:40:15 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:41.229 11:40:15 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:41.487 [2024-07-15 11:40:15.860096] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:41.487 11:40:15 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:41.746 11:40:16 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:41.746 11:40:16 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:42.005 11:40:16 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:42.005 11:40:16 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:42.264 11:40:16 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:42.524 [2024-07-15 11:40:16.805226] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:42.524 11:40:16 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:42.782 11:40:17 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:86:00.0 ']' 00:24:42.782 11:40:17 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0' 00:24:42.782 11:40:17 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:42.782 11:40:17 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0' 00:24:44.159 Initializing NVMe Controllers 00:24:44.159 Attached to NVMe Controller at 0000:86:00.0 [8086:0a54] 00:24:44.159 Associating PCIE (0000:86:00.0) NSID 1 with lcore 0 00:24:44.159 Initialization complete. Launching workers. 00:24:44.159 ======================================================== 00:24:44.159 Latency(us) 00:24:44.159 Device Information : IOPS MiB/s Average min max 00:24:44.159 PCIE (0000:86:00.0) NSID 1 from core 0: 69475.71 271.39 459.78 40.73 4349.55 00:24:44.159 ======================================================== 00:24:44.159 Total : 69475.71 271.39 459.78 40.73 4349.55 00:24:44.159 00:24:44.159 11:40:18 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:44.159 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.535 Initializing NVMe Controllers 00:24:45.535 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:45.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:45.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:45.535 Initialization complete. Launching workers. 
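For reference, the target-side provisioning that perf.sh drives in the trace above reduces to a handful of rpc.py calls against the nvmf_tgt already running in the cvl_0_0_ns_spdk namespace. A minimal sketch, assuming the SPDK checkout is the current directory and reusing the exact names and flags from this run (the CI uses the full Jenkins workspace paths shown above):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o                                   # flags copied verbatim from NVMF_TRANSPORT_OPTS above
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0           # one namespace per bdev in $bdevs
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

rpc.py talks to the default /var/tmp/spdk.sock socket, so it does not need to be wrapped in ip netns exec even though the target itself runs inside the namespace, which is exactly how the trace above invokes it.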
00:24:45.535 ======================================================== 00:24:45.535 Latency(us) 00:24:45.535 Device Information : IOPS MiB/s Average min max 00:24:45.535 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 77.00 0.30 13010.43 235.70 45603.68 00:24:45.535 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 89.00 0.35 11588.88 7950.21 47884.51 00:24:45.535 ======================================================== 00:24:45.535 Total : 166.00 0.65 12248.28 235.70 47884.51 00:24:45.535 00:24:45.535 11:40:19 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:45.535 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.912 Initializing NVMe Controllers 00:24:46.912 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:46.912 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:46.912 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:46.912 Initialization complete. Launching workers. 00:24:46.912 ======================================================== 00:24:46.912 Latency(us) 00:24:46.912 Device Information : IOPS MiB/s Average min max 00:24:46.912 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4379.99 17.11 7334.07 1002.58 12982.37 00:24:46.912 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3843.99 15.02 8361.32 5153.51 16224.76 00:24:46.912 ======================================================== 00:24:46.912 Total : 8223.98 32.12 7814.22 1002.58 16224.76 00:24:46.912 00:24:46.912 11:40:21 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:46.912 11:40:21 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:46.912 11:40:21 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:46.912 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.447 Initializing NVMe Controllers 00:24:49.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:49.447 Controller IO queue size 128, less than required. 00:24:49.447 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:49.447 Controller IO queue size 128, less than required. 00:24:49.447 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:49.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:49.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:49.447 Initialization complete. Launching workers. 
00:24:49.447 ======================================================== 00:24:49.447 Latency(us) 00:24:49.447 Device Information : IOPS MiB/s Average min max 00:24:49.447 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1233.89 308.47 107315.17 63178.23 166704.69 00:24:49.447 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 557.45 139.36 235421.22 49240.97 333933.57 00:24:49.447 ======================================================== 00:24:49.447 Total : 1791.34 447.83 147180.71 49240.97 333933.57 00:24:49.447 00:24:49.447 11:40:23 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:49.447 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.447 No valid NVMe controllers or AIO or URING devices found 00:24:49.447 Initializing NVMe Controllers 00:24:49.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:49.447 Controller IO queue size 128, less than required. 00:24:49.447 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:49.447 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:49.447 Controller IO queue size 128, less than required. 00:24:49.447 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:49.447 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:49.447 WARNING: Some requested NVMe devices were skipped 00:24:49.447 11:40:23 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:49.447 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.980 Initializing NVMe Controllers 00:24:51.980 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:51.980 Controller IO queue size 128, less than required. 00:24:51.980 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:51.980 Controller IO queue size 128, less than required. 00:24:51.980 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:51.980 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:51.980 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:51.980 Initialization complete. Launching workers. 
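The final pass repeats the large-block random read/write workload with --transport-stat, so per-lcore TCP poll-group counters are dumped alongside the usual latency table. A sketch of the standalone invocation, assuming a target already listening on 10.0.0.2:4420 and paths relative to an SPDK build:

    ./build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat    # prints polls/idle_polls/sock_completions per namespace

Roughly, idle_polls counts poll iterations that completed no work, so a large gap between polls and idle_polls in the statistics below indicates the host-side reactor stayed busy during the run.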
00:24:51.980 00:24:51.980 ==================== 00:24:51.980 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:51.980 TCP transport: 00:24:51.980 polls: 14831 00:24:51.980 idle_polls: 8742 00:24:51.980 sock_completions: 6089 00:24:51.980 nvme_completions: 5595 00:24:51.980 submitted_requests: 8368 00:24:51.980 queued_requests: 1 00:24:51.980 00:24:51.980 ==================== 00:24:51.980 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:51.980 TCP transport: 00:24:51.980 polls: 17844 00:24:51.980 idle_polls: 14340 00:24:51.980 sock_completions: 3504 00:24:51.980 nvme_completions: 4527 00:24:51.980 submitted_requests: 6802 00:24:51.980 queued_requests: 1 00:24:51.980 ======================================================== 00:24:51.980 Latency(us) 00:24:51.980 Device Information : IOPS MiB/s Average min max 00:24:51.980 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1396.22 349.06 93145.42 55845.83 145750.90 00:24:51.980 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1129.66 282.41 115281.55 33594.19 175900.90 00:24:51.980 ======================================================== 00:24:51.980 Total : 2525.88 631.47 103045.43 33594.19 175900.90 00:24:51.980 00:24:51.980 11:40:26 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:51.980 11:40:26 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:52.239 11:40:26 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:52.239 11:40:26 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:52.239 11:40:26 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:52.239 11:40:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:52.239 11:40:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:52.239 11:40:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:52.239 11:40:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:52.239 11:40:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:52.239 11:40:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:52.239 rmmod nvme_tcp 00:24:52.239 rmmod nvme_fabrics 00:24:52.239 rmmod nvme_keyring 00:24:52.239 11:40:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:52.239 11:40:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:52.239 11:40:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:52.239 11:40:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2889435 ']' 00:24:52.239 11:40:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2889435 00:24:52.239 11:40:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 2889435 ']' 00:24:52.239 11:40:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 2889435 00:24:52.239 11:40:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:24:52.239 11:40:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:52.498 11:40:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2889435 00:24:52.498 11:40:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:52.498 11:40:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:52.498 11:40:26 nvmf_tcp.nvmf_perf 
-- common/autotest_common.sh@966 -- # echo 'killing process with pid 2889435' 00:24:52.498 killing process with pid 2889435 00:24:52.498 11:40:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 2889435 00:24:52.498 11:40:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 2889435 00:24:53.875 11:40:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:53.875 11:40:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:53.875 11:40:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:53.875 11:40:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:53.875 11:40:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:53.875 11:40:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.875 11:40:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:53.875 11:40:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.406 11:40:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:56.406 00:24:56.406 real 0m25.594s 00:24:56.406 user 1m9.288s 00:24:56.406 sys 0m7.714s 00:24:56.406 11:40:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:56.406 11:40:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:56.406 ************************************ 00:24:56.406 END TEST nvmf_perf 00:24:56.406 ************************************ 00:24:56.406 11:40:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:56.406 11:40:30 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:56.406 11:40:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:56.406 11:40:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:56.406 11:40:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:56.406 ************************************ 00:24:56.406 START TEST nvmf_fio_host 00:24:56.406 ************************************ 00:24:56.406 11:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:56.406 * Looking for test storage... 
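With nvmf_perf finished (about 25.6 s wall time for the suite), run_test dispatches the next host-side suite, nvmf_fio_host, which exercises the fio SPDK NVMe plugin against the same TCP target setup. A hedged sketch of invoking it by hand, assuming an SPDK checkout and fio built under /usr/src/fio as on this rig (the checkout path is hypothetical; the CI uses the Jenkins workspace path above):

    cd /path/to/spdk
    sudo ./test/nvmf/host/fio.sh --transport=tcp

Root is needed because, as the nvmftestinit trace that follows shows, the script moves one NIC port into a network namespace, rewrites its addresses, and loads nvme-tcp.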
00:24:56.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:56.406 11:40:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:56.407 11:40:30 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:01.734 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:01.734 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:01.734 Found net devices under 0000:af:00.0: cvl_0_0 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:01.734 Found net devices under 0000:af:00.1: cvl_0_1 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:01.734 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:01.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:01.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:25:01.994 00:25:01.994 --- 10.0.0.2 ping statistics --- 00:25:01.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.994 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:01.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:01.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:25:01.994 00:25:01.994 --- 10.0.0.1 ping statistics --- 00:25:01.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.994 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2895911 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2895911 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2895911 ']' 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:01.994 11:40:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.994 [2024-07-15 11:40:36.410577] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:25:01.994 [2024-07-15 11:40:36.410651] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:01.994 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.256 [2024-07-15 11:40:36.497787] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:02.256 [2024-07-15 11:40:36.589434] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:02.256 [2024-07-15 11:40:36.589477] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.256 [2024-07-15 11:40:36.589488] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:02.256 [2024-07-15 11:40:36.589497] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:02.256 [2024-07-15 11:40:36.589504] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:02.256 [2024-07-15 11:40:36.589557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.256 [2024-07-15 11:40:36.589670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:02.256 [2024-07-15 11:40:36.589782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:02.256 [2024-07-15 11:40:36.589782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.935 11:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:02.935 11:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:25:02.935 11:40:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:03.194 [2024-07-15 11:40:37.515667] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.194 11:40:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:03.194 11:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:03.194 11:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.194 11:40:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:03.452 Malloc1 00:25:03.452 11:40:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:03.711 11:40:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:03.970 11:40:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:04.229 [2024-07-15 11:40:38.607650] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:04.229 11:40:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:04.488 11:40:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:04.488 11:40:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:04.488 11:40:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:25:04.488 11:40:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:04.488 11:40:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:04.488 11:40:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:04.488 11:40:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:04.488 11:40:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:04.488 11:40:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:04.488 11:40:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:04.488 11:40:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:04.488 11:40:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:04.488 11:40:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:04.488 11:40:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:04.488 11:40:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:04.488 11:40:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:04.488 11:40:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:04.488 11:40:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:04.488 11:40:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:04.764 11:40:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:04.764 11:40:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:04.764 11:40:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:04.764 11:40:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:05.023 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:05.023 fio-3.35 00:25:05.023 Starting 1 thread 00:25:05.023 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.554 00:25:07.554 test: (groupid=0, jobs=1): err= 0: pid=2896599: Mon Jul 15 11:40:41 2024 00:25:07.554 read: IOPS=3739, BW=14.6MiB/s (15.3MB/s)(29.4MiB/2016msec) 00:25:07.554 slat (usec): min=2, max=244, avg= 2.65, stdev= 4.01 00:25:07.554 clat (usec): min=5097, max=33675, avg=18487.79, stdev=1828.38 00:25:07.554 lat (usec): min=5131, max=33677, avg=18490.44, stdev=1827.89 00:25:07.554 clat percentiles (usec): 00:25:07.554 | 1.00th=[14746], 5.00th=[15926], 10.00th=[16450], 20.00th=[17171], 00:25:07.554 | 30.00th=[17433], 40.00th=[17957], 50.00th=[18482], 60.00th=[19006], 00:25:07.554 | 70.00th=[19268], 80.00th=[20055], 90.00th=[20579], 95.00th=[21365], 00:25:07.554 | 99.00th=[22414], 99.50th=[22938], 99.90th=[30278], 99.95th=[30278], 00:25:07.554 | 99.99th=[33817] 00:25:07.554 bw ( KiB/s): min=14368, 
max=15400, per=99.96%, avg=14950.00, stdev=434.14, samples=4 00:25:07.554 iops : min= 3592, max= 3850, avg=3737.50, stdev=108.53, samples=4 00:25:07.554 write: IOPS=3762, BW=14.7MiB/s (15.4MB/s)(29.6MiB/2016msec); 0 zone resets 00:25:07.554 slat (usec): min=2, max=235, avg= 2.75, stdev= 2.92 00:25:07.554 clat (usec): min=2467, max=29911, avg=15536.18, stdev=1569.03 00:25:07.554 lat (usec): min=2483, max=29914, avg=15538.93, stdev=1568.63 00:25:07.554 clat percentiles (usec): 00:25:07.554 | 1.00th=[12387], 5.00th=[13435], 10.00th=[13960], 20.00th=[14484], 00:25:07.554 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15533], 60.00th=[15926], 00:25:07.554 | 70.00th=[16188], 80.00th=[16581], 90.00th=[16909], 95.00th=[17433], 00:25:07.554 | 99.00th=[18744], 99.50th=[21365], 99.90th=[26608], 99.95th=[28967], 00:25:07.554 | 99.99th=[30016] 00:25:07.554 bw ( KiB/s): min=14656, max=15296, per=99.95%, avg=15044.00, stdev=272.90, samples=4 00:25:07.554 iops : min= 3664, max= 3824, avg=3761.00, stdev=68.23, samples=4 00:25:07.554 lat (msec) : 4=0.07%, 10=0.27%, 20=89.67%, 50=9.99% 00:25:07.554 cpu : usr=73.40%, sys=25.51%, ctx=62, majf=0, minf=6 00:25:07.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:25:07.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:07.554 issued rwts: total=7538,7586,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.554 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:07.554 00:25:07.554 Run status group 0 (all jobs): 00:25:07.554 READ: bw=14.6MiB/s (15.3MB/s), 14.6MiB/s-14.6MiB/s (15.3MB/s-15.3MB/s), io=29.4MiB (30.9MB), run=2016-2016msec 00:25:07.554 WRITE: bw=14.7MiB/s (15.4MB/s), 14.7MiB/s-14.7MiB/s (15.4MB/s-15.4MB/s), io=29.6MiB (31.1MB), run=2016-2016msec 00:25:07.554 11:40:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:07.554 11:40:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:07.554 11:40:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:07.554 11:40:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:07.554 11:40:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:07.554 11:40:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:07.554 11:40:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:07.554 11:40:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:07.554 11:40:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:07.554 11:40:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:07.554 11:40:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:07.554 11:40:41 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:07.554 11:40:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:07.554 11:40:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:07.554 11:40:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:07.554 11:40:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:07.554 11:40:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:07.554 11:40:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:07.554 11:40:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:07.554 11:40:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:07.554 11:40:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:07.554 11:40:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:07.812 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:07.812 fio-3.35 00:25:07.812 Starting 1 thread 00:25:07.812 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.358 00:25:10.358 test: (groupid=0, jobs=1): err= 0: pid=2897253: Mon Jul 15 11:40:44 2024 00:25:10.358 read: IOPS=4609, BW=72.0MiB/s (75.5MB/s)(145MiB/2010msec) 00:25:10.358 slat (usec): min=3, max=126, avg= 4.21, stdev= 1.49 00:25:10.358 clat (usec): min=4533, max=35022, avg=15629.20, stdev=5502.35 00:25:10.358 lat (usec): min=4537, max=35026, avg=15633.42, stdev=5502.37 00:25:10.358 clat percentiles (usec): 00:25:10.358 | 1.00th=[ 5604], 5.00th=[ 7242], 10.00th=[ 8094], 20.00th=[ 9634], 00:25:10.358 | 30.00th=[12125], 40.00th=[14746], 50.00th=[16057], 60.00th=[17171], 00:25:10.358 | 70.00th=[18744], 80.00th=[19792], 90.00th=[22938], 95.00th=[24773], 00:25:10.358 | 99.00th=[28443], 99.50th=[29754], 99.90th=[31065], 99.95th=[33162], 00:25:10.358 | 99.99th=[34866] 00:25:10.358 bw ( KiB/s): min=29216, max=56864, per=52.41%, avg=38656.00, stdev=12528.81, samples=4 00:25:10.358 iops : min= 1826, max= 3554, avg=2416.00, stdev=783.05, samples=4 00:25:10.358 write: IOPS=2792, BW=43.6MiB/s (45.8MB/s)(79.5MiB/1821msec); 0 zone resets 00:25:10.358 slat (usec): min=45, max=261, avg=47.04, stdev= 5.23 00:25:10.358 clat (usec): min=7873, max=42577, avg=21312.54, stdev=7459.91 00:25:10.358 lat (usec): min=7919, max=42622, avg=21359.58, stdev=7459.70 00:25:10.358 clat percentiles (usec): 00:25:10.358 | 1.00th=[ 9503], 5.00th=[10552], 10.00th=[11338], 20.00th=[12518], 00:25:10.358 | 30.00th=[14091], 40.00th=[19530], 50.00th=[23725], 60.00th=[25560], 00:25:10.358 | 70.00th=[26608], 80.00th=[28181], 90.00th=[30016], 95.00th=[31589], 00:25:10.358 | 99.00th=[33817], 99.50th=[38536], 99.90th=[41681], 99.95th=[42206], 00:25:10.358 | 99.99th=[42730] 00:25:10.358 bw ( KiB/s): min=31328, max=59104, per=90.08%, avg=40256.00, stdev=12763.01, samples=4 00:25:10.358 iops : min= 1958, max= 3694, avg=2516.00, stdev=797.69, samples=4 00:25:10.358 lat (msec) : 10=15.28%, 20=51.13%, 50=33.59% 00:25:10.358 cpu : usr=80.64%, sys=18.37%, ctx=40, majf=0, minf=3 
00:25:10.358 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:10.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:10.358 issued rwts: total=9265,5086,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.358 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.358 00:25:10.358 Run status group 0 (all jobs): 00:25:10.358 READ: bw=72.0MiB/s (75.5MB/s), 72.0MiB/s-72.0MiB/s (75.5MB/s-75.5MB/s), io=145MiB (152MB), run=2010-2010msec 00:25:10.358 WRITE: bw=43.6MiB/s (45.8MB/s), 43.6MiB/s-43.6MiB/s (45.8MB/s-45.8MB/s), io=79.5MiB (83.3MB), run=1821-1821msec 00:25:10.358 11:40:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:10.358 11:40:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:10.358 11:40:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:10.358 11:40:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:10.359 11:40:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:10.359 11:40:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:10.359 11:40:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:25:10.359 11:40:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:10.359 11:40:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:25:10.359 11:40:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:10.359 11:40:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:10.359 rmmod nvme_tcp 00:25:10.359 rmmod nvme_fabrics 00:25:10.359 rmmod nvme_keyring 00:25:10.359 11:40:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:10.359 11:40:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:25:10.359 11:40:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:25:10.359 11:40:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2895911 ']' 00:25:10.359 11:40:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2895911 00:25:10.359 11:40:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2895911 ']' 00:25:10.359 11:40:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2895911 00:25:10.359 11:40:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:25:10.359 11:40:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:10.359 11:40:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2895911 00:25:10.617 11:40:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:10.617 11:40:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:10.617 11:40:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2895911' 00:25:10.617 killing process with pid 2895911 00:25:10.617 11:40:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2895911 00:25:10.617 11:40:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2895911 00:25:10.617 11:40:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:10.617 11:40:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # 
[[ tcp == \t\c\p ]] 00:25:10.617 11:40:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:10.617 11:40:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:10.617 11:40:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:10.617 11:40:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.617 11:40:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:10.617 11:40:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.152 11:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:13.152 00:25:13.152 real 0m16.685s 00:25:13.152 user 1m2.482s 00:25:13.152 sys 0m6.584s 00:25:13.152 11:40:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:13.152 11:40:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.152 ************************************ 00:25:13.152 END TEST nvmf_fio_host 00:25:13.152 ************************************ 00:25:13.152 11:40:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:13.152 11:40:47 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:13.152 11:40:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:13.152 11:40:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:13.152 11:40:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.152 ************************************ 00:25:13.152 START TEST nvmf_failover 00:25:13.152 ************************************ 00:25:13.152 11:40:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:13.152 * Looking for test storage... 
00:25:13.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:13.152 11:40:47 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:13.152 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:13.152 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:25:13.153 11:40:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:18.433 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:18.433 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:18.433 Found net devices under 0000:af:00.0: cvl_0_0 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:18.433 Found net devices under 0000:af:00.1: cvl_0_1 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:18.433 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:18.692 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:18.692 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:18.692 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:18.692 11:40:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:18.692 11:40:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:18.692 11:40:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:18.692 11:40:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:18.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:18.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:25:18.692 00:25:18.692 --- 10.0.0.2 ping statistics --- 00:25:18.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.692 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:25:18.692 11:40:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:18.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:18.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:25:18.692 00:25:18.692 --- 10.0.0.1 ping statistics --- 00:25:18.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.692 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:25:18.692 11:40:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:18.692 11:40:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:25:18.692 11:40:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:18.692 11:40:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:18.692 11:40:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:18.692 11:40:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:18.692 11:40:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:18.692 11:40:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:18.692 11:40:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:18.692 11:40:53 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:18.692 11:40:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:18.692 11:40:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:18.692 11:40:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:18.692 11:40:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2901226 00:25:18.692 11:40:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:18.692 11:40:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2901226 00:25:18.692 11:40:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2901226 ']' 00:25:18.692 11:40:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.692 11:40:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:18.692 11:40:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.692 11:40:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:18.692 11:40:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:18.950 [2024-07-15 11:40:53.187493] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:25:18.950 [2024-07-15 11:40:53.187548] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.950 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.950 [2024-07-15 11:40:53.272733] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:18.950 [2024-07-15 11:40:53.382229] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:18.950 [2024-07-15 11:40:53.382280] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:18.950 [2024-07-15 11:40:53.382292] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:18.950 [2024-07-15 11:40:53.382303] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:18.950 [2024-07-15 11:40:53.382313] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:18.950 [2024-07-15 11:40:53.382377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:18.950 [2024-07-15 11:40:53.382490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:18.950 [2024-07-15 11:40:53.382492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:19.885 11:40:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:19.885 11:40:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:19.885 11:40:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:19.885 11:40:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:19.885 11:40:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:19.885 11:40:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:19.885 11:40:54 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:20.143 [2024-07-15 11:40:54.404822] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.143 11:40:54 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:20.401 Malloc0 00:25:20.401 11:40:54 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:20.658 11:40:54 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:20.916 11:40:55 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:21.174 [2024-07-15 11:40:55.475870] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:21.174 11:40:55 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:21.433 [2024-07-15 
11:40:55.732911] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:21.433 11:40:55 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:21.691 [2024-07-15 11:40:55.981938] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:21.691 11:40:56 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2901776 00:25:21.691 11:40:56 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:21.691 11:40:56 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:21.691 11:40:56 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2901776 /var/tmp/bdevperf.sock 00:25:21.691 11:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2901776 ']' 00:25:21.691 11:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:21.691 11:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:21.691 11:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:21.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:21.691 11:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:21.691 11:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:21.949 11:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:21.949 11:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:21.949 11:40:56 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:22.513 NVMe0n1 00:25:22.513 11:40:56 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:22.770 00:25:22.770 11:40:57 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2902036 00:25:22.770 11:40:57 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:22.770 11:40:57 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:23.702 11:40:58 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:23.959 11:40:58 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:27.238 11:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t 
tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:27.238 00:25:27.238 11:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:27.496 11:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:30.783 11:41:04 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:30.783 [2024-07-15 11:41:05.153451] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:30.783 11:41:05 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:32.158 11:41:06 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:32.158 [2024-07-15 11:41:06.435265] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.158 [2024-07-15 11:41:06.435308] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.158 [2024-07-15 11:41:06.435322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.158 [2024-07-15 11:41:06.435334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.158 [2024-07-15 11:41:06.435345] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.158 [2024-07-15 11:41:06.435356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.158 [2024-07-15 11:41:06.435368] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.158 [2024-07-15 11:41:06.435385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.158 [2024-07-15 11:41:06.435396] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.158 [2024-07-15 11:41:06.435408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.158 [2024-07-15 11:41:06.435418] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.158 [2024-07-15 11:41:06.435429] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.158 [2024-07-15 11:41:06.435440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.158 [2024-07-15 11:41:06.435452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.158 [2024-07-15 11:41:06.435463] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.158 [2024-07-15 11:41:06.435474] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.158 [2024-07-15 11:41:06.435485] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.158 [2024-07-15 11:41:06.435496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.158 [2024-07-15 11:41:06.435507] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.158 [2024-07-15 11:41:06.435517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.158 [2024-07-15 11:41:06.435528] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.158 [2024-07-15 11:41:06.435540] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.158 [2024-07-15 11:41:06.435551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.158 [2024-07-15 11:41:06.435563] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.158 [2024-07-15 11:41:06.435573] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435585] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435595] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435606] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435617] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435629] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435640] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435662] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435674] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435688] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435699] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435710] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the 
state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435721] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435732] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435743] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435754] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435765] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435776] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435787] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435798] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435809] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435819] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435831] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435842] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435853] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435864] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 [2024-07-15 11:41:06.435874] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a110 is same with the state(5) to be set 00:25:32.159 11:41:06 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2902036 00:25:38.728 0 00:25:38.728 11:41:12 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2901776 00:25:38.728 11:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2901776 ']' 00:25:38.728 11:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2901776 00:25:38.728 11:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:38.728 11:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:38.728 11:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2901776 00:25:38.728 11:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:38.728 11:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:38.728 11:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2901776' 00:25:38.728 killing process with 
pid 2901776 00:25:38.728 11:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2901776 00:25:38.728 11:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2901776 00:25:38.728 11:41:12 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:38.728 [2024-07-15 11:40:56.062825] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:25:38.728 [2024-07-15 11:40:56.062892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2901776 ] 00:25:38.728 EAL: No free 2048 kB hugepages reported on node 1 00:25:38.728 [2024-07-15 11:40:56.144594] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.728 [2024-07-15 11:40:56.233690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.728 Running I/O for 15 seconds... 00:25:38.728 [2024-07-15 11:40:58.297880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.728 [2024-07-15 11:40:58.297927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.728 [2024-07-15 11:40:58.297947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:33968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.728 [2024-07-15 11:40:58.297958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.728 [2024-07-15 11:40:58.297971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.728 [2024-07-15 11:40:58.297981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.728 [2024-07-15 11:40:58.297994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.728 [2024-07-15 11:40:58.298003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.728 [2024-07-15 11:40:58.298015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:33992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.728 [2024-07-15 11:40:58.298024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.728 [2024-07-15 11:40:58.298036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:34000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.728 [2024-07-15 11:40:58.298046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.728 [2024-07-15 11:40:58.298057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:34008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.728 [2024-07-15 11:40:58.298067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.728 
[2024-07-15 11:40:58.298079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:34016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.728 [2024-07-15 11:40:58.298089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.728 [2024-07-15 11:40:58.298100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:34024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.728 [2024-07-15 11:40:58.298110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.728 [2024-07-15 11:40:58.298121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.728 [2024-07-15 11:40:58.298131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.728 [2024-07-15 11:40:58.298143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:34040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.728 [2024-07-15 11:40:58.298152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.728 [2024-07-15 11:40:58.298170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.728 [2024-07-15 11:40:58.298180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.728 [2024-07-15 11:40:58.298191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:34056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.728 [2024-07-15 11:40:58.298201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.728 [2024-07-15 11:40:58.298213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.728 [2024-07-15 11:40:58.298222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.728 [2024-07-15 11:40:58.298234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:34072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.728 [2024-07-15 11:40:58.298244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.728 [2024-07-15 11:40:58.298261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.728 [2024-07-15 11:40:58.298271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.728 [2024-07-15 11:40:58.298282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:34096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:34104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:34128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:34136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:34152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:34168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:66 nsid:1 lba:34176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:34184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:34192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:34200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:34208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:34216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:34232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:34240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:34248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:34256 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:34264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:34296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:34320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:34328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:34336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 
11:40:58.298946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:34344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.298981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:34352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.298990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.299002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.299011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.299023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:34368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.299033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.299044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:34376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.299053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.729 [2024-07-15 11:40:58.299065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.729 [2024-07-15 11:40:58.299074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:34392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:34400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299157] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:34424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:34432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:34440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:34448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:34456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:34480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:34496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:34504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:34512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:34520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:34536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:34544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:34560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:34568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:34576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:34584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:34592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:34600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:34616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:34632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:34640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:34656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299819] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:34664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.730 [2024-07-15 11:40:58.299850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:33840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.730 [2024-07-15 11:40:58.299871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:34672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:34696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:34704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.730 [2024-07-15 11:40:58.299987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:34712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.730 [2024-07-15 11:40:58.299997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.731 [2024-07-15 11:40:58.300012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:34720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.731 [2024-07-15 11:40:58.300022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.731 [2024-07-15 11:40:58.300033] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:34728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.731 [2024-07-15 11:40:58.300043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.731 [2024-07-15 11:40:58.300069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.731 [2024-07-15 11:40:58.300081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34736 len:8 PRP1 0x0 PRP2 0x0 00:25:38.731 [2024-07-15 11:40:58.300090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.731 [2024-07-15 11:40:58.300126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:38.731 [2024-07-15 11:40:58.300138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.731 [2024-07-15 11:40:58.300148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:38.731 [2024-07-15 11:40:58.300158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.731 [2024-07-15 11:40:58.300168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:38.731 [2024-07-15 11:40:58.300177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.731 [2024-07-15 11:40:58.300187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:38.731 [2024-07-15 11:40:58.300196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.731 [2024-07-15 11:40:58.300206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01a30 is same with the state(5) to be set 00:25:38.731 [2024-07-15 11:40:58.300468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.731 [2024-07-15 11:40:58.300479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.731 [2024-07-15 11:40:58.300488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34744 len:8 PRP1 0x0 PRP2 0x0 00:25:38.731 [2024-07-15 11:40:58.300497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.731 [2024-07-15 11:40:58.300509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.731 [2024-07-15 11:40:58.300516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.731 [2024-07-15 11:40:58.300524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34752 len:8 PRP1 0x0 PRP2 0x0 00:25:38.731 [2024-07-15 11:40:58.300533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.731 [2024-07-15 11:40:58.300543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:25:38.731 [2024-07-15 11:40:58.300550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.731 [2024-07-15 11:40:58.300558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34760 len:8 PRP1 0x0 PRP2 0x0 00:25:38.731 [2024-07-15 11:40:58.300567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.731 [2024-07-15 11:40:58.300577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.731 [2024-07-15 11:40:58.300584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.731 [2024-07-15 11:40:58.300592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34768 len:8 PRP1 0x0 PRP2 0x0 00:25:38.731 [2024-07-15 11:40:58.300602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.731 [2024-07-15 11:40:58.300613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.731 [2024-07-15 11:40:58.300621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.731 [2024-07-15 11:40:58.300631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34776 len:8 PRP1 0x0 PRP2 0x0 00:25:38.731 [2024-07-15 11:40:58.300640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.731 [2024-07-15 11:40:58.300650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.731 [2024-07-15 11:40:58.300657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.731 [2024-07-15 11:40:58.300665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34784 len:8 PRP1 0x0 PRP2 0x0 00:25:38.731 [2024-07-15 11:40:58.300674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.731 [2024-07-15 11:40:58.300684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.731 [2024-07-15 11:40:58.300691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.731 [2024-07-15 11:40:58.300698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34792 len:8 PRP1 0x0 PRP2 0x0 00:25:38.731 [2024-07-15 11:40:58.300708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.731 [2024-07-15 11:40:58.300718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.731 [2024-07-15 11:40:58.300725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.731 [2024-07-15 11:40:58.300733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34800 len:8 PRP1 0x0 PRP2 0x0 00:25:38.731 [2024-07-15 11:40:58.300742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.731 [2024-07-15 11:40:58.300751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.731 [2024-07-15 11:40:58.300759] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.731 [2024-07-15 11:40:58.300766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34808 len:8 PRP1 0x0 PRP2 0x0 00:25:38.731 [2024-07-15 11:40:58.300775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.731 [2024-07-15 11:40:58.300785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.731 [2024-07-15 11:40:58.300793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.731 [2024-07-15 11:40:58.300800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34816 len:8 PRP1 0x0 PRP2 0x0 00:25:38.731 [2024-07-15 11:40:58.300809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.731 [2024-07-15 11:40:58.300819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.731 [2024-07-15 11:40:58.300826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.731 [2024-07-15 11:40:58.300834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34824 len:8 PRP1 0x0 PRP2 0x0 00:25:38.731 [2024-07-15 11:40:58.300843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.731 [2024-07-15 11:40:58.300853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.731 [2024-07-15 11:40:58.300860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.731 [2024-07-15 11:40:58.300868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34832 len:8 PRP1 0x0 PRP2 0x0 00:25:38.731 [2024-07-15 11:40:58.300877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.731 [2024-07-15 11:40:58.300892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.731 [2024-07-15 11:40:58.300900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.731 [2024-07-15 11:40:58.300908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34840 len:8 PRP1 0x0 PRP2 0x0 00:25:38.731 [2024-07-15 11:40:58.300918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.731 [2024-07-15 11:40:58.300928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.731 [2024-07-15 11:40:58.300935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.731 [2024-07-15 11:40:58.300943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33848 len:8 PRP1 0x0 PRP2 0x0 00:25:38.731 [2024-07-15 11:40:58.300952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.731 [2024-07-15 11:40:58.300961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.731 [2024-07-15 11:40:58.300969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:25:38.731 [2024-07-15 11:40:58.300977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33856 len:8 PRP1 0x0 PRP2 0x0 00:25:38.731 [2024-07-15 11:40:58.300986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.731 [2024-07-15 11:40:58.300995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.731 [2024-07-15 11:40:58.301002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.731 [2024-07-15 11:40:58.301010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33864 len:8 PRP1 0x0 PRP2 0x0 00:25:38.731 [2024-07-15 11:40:58.301021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.731 [2024-07-15 11:40:58.301031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.731 [2024-07-15 11:40:58.301039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.731 [2024-07-15 11:40:58.301047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33872 len:8 PRP1 0x0 PRP2 0x0 00:25:38.731 [2024-07-15 11:40:58.301055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.731 [2024-07-15 11:40:58.301065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.731 [2024-07-15 11:40:58.301073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.731 [2024-07-15 11:40:58.301081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33880 len:8 PRP1 0x0 PRP2 0x0 00:25:38.732 [2024-07-15 11:40:58.301090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.732 [2024-07-15 11:40:58.301100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.732 [2024-07-15 11:40:58.301107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.732 [2024-07-15 11:40:58.301115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33888 len:8 PRP1 0x0 PRP2 0x0 00:25:38.732 [2024-07-15 11:40:58.301125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.732 [2024-07-15 11:40:58.301135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.732 [2024-07-15 11:40:58.301142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.732 [2024-07-15 11:40:58.301150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33896 len:8 PRP1 0x0 PRP2 0x0 00:25:38.732 [2024-07-15 11:40:58.301161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.732 [2024-07-15 11:40:58.301171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.732 [2024-07-15 11:40:58.301179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.732 [2024-07-15 
11:40:58.301187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33904 len:8 PRP1 0x0 PRP2 0x0 00:25:38.732 [2024-07-15 11:40:58.301196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.732 [2024-07-15 11:40:58.301206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.732 [2024-07-15 11:40:58.301213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.732 [2024-07-15 11:40:58.301221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33912 len:8 PRP1 0x0 PRP2 0x0 00:25:38.732 [2024-07-15 11:40:58.301230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.732 [2024-07-15 11:40:58.301240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.732 [2024-07-15 11:40:58.301247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.732 [2024-07-15 11:40:58.301262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33920 len:8 PRP1 0x0 PRP2 0x0 00:25:38.732 [2024-07-15 11:40:58.301272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.732 [2024-07-15 11:40:58.301281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.732 [2024-07-15 11:40:58.301289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.732 [2024-07-15 11:40:58.301297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33928 len:8 PRP1 0x0 PRP2 0x0 00:25:38.732 [2024-07-15 11:40:58.301306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.732 [2024-07-15 11:40:58.301316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.732 [2024-07-15 11:40:58.301323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.732 [2024-07-15 11:40:58.301331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33936 len:8 PRP1 0x0 PRP2 0x0 00:25:38.732 [2024-07-15 11:40:58.301340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.732 [2024-07-15 11:40:58.301349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.732 [2024-07-15 11:40:58.301356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.732 [2024-07-15 11:40:58.301365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33944 len:8 PRP1 0x0 PRP2 0x0 00:25:38.732 [2024-07-15 11:40:58.301374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.732 [2024-07-15 11:40:58.301383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.732 [2024-07-15 11:40:58.301391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.732 [2024-07-15 11:40:58.301398] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33952 len:8 PRP1 0x0 PRP2 0x0 00:25:38.732 [2024-07-15 11:40:58.301407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.732 [2024-07-15 11:40:58.301418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.732 [2024-07-15 11:40:58.301426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.732 [2024-07-15 11:40:58.301436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33960 len:8 PRP1 0x0 PRP2 0x0 00:25:38.732 [2024-07-15 11:40:58.301445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.732 [2024-07-15 11:40:58.301455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.732 [2024-07-15 11:40:58.301463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.732 [2024-07-15 11:40:58.301471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33824 len:8 PRP1 0x0 PRP2 0x0 00:25:38.732 [2024-07-15 11:40:58.301480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.732 [2024-07-15 11:40:58.301490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.732 [2024-07-15 11:40:58.301497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.732 [2024-07-15 11:40:58.301505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33968 len:8 PRP1 0x0 PRP2 0x0 00:25:38.732 [2024-07-15 11:40:58.301514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.732 [2024-07-15 11:40:58.301524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.732 [2024-07-15 11:40:58.301531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.732 [2024-07-15 11:40:58.301539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33976 len:8 PRP1 0x0 PRP2 0x0 00:25:38.732 [2024-07-15 11:40:58.301548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.732 [2024-07-15 11:40:58.311871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.732 [2024-07-15 11:40:58.311884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.732 [2024-07-15 11:40:58.311895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33984 len:8 PRP1 0x0 PRP2 0x0 00:25:38.732 [2024-07-15 11:40:58.311904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.732 [2024-07-15 11:40:58.311915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.732 [2024-07-15 11:40:58.311922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.732 [2024-07-15 11:40:58.311931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:33992 len:8 PRP1 0x0 PRP2 0x0 00:25:38.732 [2024-07-15 11:40:58.311940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.732 [2024-07-15 11:40:58.311950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.732 [2024-07-15 11:40:58.311958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.732 [2024-07-15 11:40:58.311966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34000 len:8 PRP1 0x0 PRP2 0x0 00:25:38.732 [2024-07-15 11:40:58.311976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.732 [2024-07-15 11:40:58.311985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.732 [2024-07-15 11:40:58.311993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.732 [2024-07-15 11:40:58.312001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34008 len:8 PRP1 0x0 PRP2 0x0 00:25:38.732 [2024-07-15 11:40:58.312011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.732 [2024-07-15 11:40:58.312021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.732 [2024-07-15 11:40:58.312030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.732 [2024-07-15 11:40:58.312038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34016 len:8 PRP1 0x0 PRP2 0x0 00:25:38.732 [2024-07-15 11:40:58.312048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.732 [2024-07-15 11:40:58.312059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.732 [2024-07-15 11:40:58.312067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.732 [2024-07-15 11:40:58.312075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34024 len:8 PRP1 0x0 PRP2 0x0 00:25:38.732 [2024-07-15 11:40:58.312085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.733 [2024-07-15 11:40:58.312095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.733 [2024-07-15 11:40:58.312102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.733 [2024-07-15 11:40:58.312111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34032 len:8 PRP1 0x0 PRP2 0x0 00:25:38.733 [2024-07-15 11:40:58.312120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.733 [2024-07-15 11:40:58.312130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.733 [2024-07-15 11:40:58.312138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.733 [2024-07-15 11:40:58.312146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34040 len:8 PRP1 0x0 PRP2 0x0 
00:25:38.733 [2024-07-15 11:40:58.312156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.733 [2024-07-15 11:40:58.312166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.733 [2024-07-15 11:40:58.312173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.733 [2024-07-15 11:40:58.312182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34048 len:8 PRP1 0x0 PRP2 0x0 00:25:38.733 [2024-07-15 11:40:58.312191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.733 [2024-07-15 11:40:58.312201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.733 [2024-07-15 11:40:58.312209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.733 [2024-07-15 11:40:58.312217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34056 len:8 PRP1 0x0 PRP2 0x0 00:25:38.733 [2024-07-15 11:40:58.312226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.733 [2024-07-15 11:40:58.312236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.733 [2024-07-15 11:40:58.312244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.733 [2024-07-15 11:40:58.312252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34064 len:8 PRP1 0x0 PRP2 0x0 00:25:38.733 [2024-07-15 11:40:58.312266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.733 [2024-07-15 11:40:58.312276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.733 [2024-07-15 11:40:58.312284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.733 [2024-07-15 11:40:58.312292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34072 len:8 PRP1 0x0 PRP2 0x0 00:25:38.733 [2024-07-15 11:40:58.312301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.733 [2024-07-15 11:40:58.312313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.733 [2024-07-15 11:40:58.312321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.733 [2024-07-15 11:40:58.312329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34080 len:8 PRP1 0x0 PRP2 0x0 00:25:38.733 [2024-07-15 11:40:58.312339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.733 [2024-07-15 11:40:58.312349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.733 [2024-07-15 11:40:58.312356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.733 [2024-07-15 11:40:58.312364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34088 len:8 PRP1 0x0 PRP2 0x0 00:25:38.733 [2024-07-15 11:40:58.312374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.733 [2024-07-15 11:40:58.312384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.733 [2024-07-15 11:40:58.312392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.733 [2024-07-15 11:40:58.312400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34096 len:8 PRP1 0x0 PRP2 0x0 00:25:38.733 [2024-07-15 11:40:58.312409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.733 [2024-07-15 11:40:58.312420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.733 [2024-07-15 11:40:58.312427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.733 [2024-07-15 11:40:58.312436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34104 len:8 PRP1 0x0 PRP2 0x0 00:25:38.733 [2024-07-15 11:40:58.312445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.733 [2024-07-15 11:40:58.312455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.733 [2024-07-15 11:40:58.312463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.733 [2024-07-15 11:40:58.312472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34112 len:8 PRP1 0x0 PRP2 0x0 00:25:38.733 [2024-07-15 11:40:58.312481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.733 [2024-07-15 11:40:58.312491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.733 [2024-07-15 11:40:58.312498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.733 [2024-07-15 11:40:58.312506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34120 len:8 PRP1 0x0 PRP2 0x0 00:25:38.733 [2024-07-15 11:40:58.312516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.733 [2024-07-15 11:40:58.312526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.733 [2024-07-15 11:40:58.312533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.733 [2024-07-15 11:40:58.312541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34128 len:8 PRP1 0x0 PRP2 0x0 00:25:38.733 [2024-07-15 11:40:58.312551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.733 [2024-07-15 11:40:58.312561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.733 [2024-07-15 11:40:58.312568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.733 [2024-07-15 11:40:58.312576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34136 len:8 PRP1 0x0 PRP2 0x0 00:25:38.733 [2024-07-15 11:40:58.312587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.733 [2024-07-15 11:40:58.312597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.733 [2024-07-15 11:40:58.312605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.733 [2024-07-15 11:40:58.312613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34144 len:8 PRP1 0x0 PRP2 0x0 00:25:38.733 [2024-07-15 11:40:58.312623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.733 [2024-07-15 11:40:58.312633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.733 [2024-07-15 11:40:58.312640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.733 [2024-07-15 11:40:58.312648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34152 len:8 PRP1 0x0 PRP2 0x0 00:25:38.733 [2024-07-15 11:40:58.312658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.733 [2024-07-15 11:40:58.312668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.733 [2024-07-15 11:40:58.312675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.733 [2024-07-15 11:40:58.312683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34160 len:8 PRP1 0x0 PRP2 0x0 00:25:38.733 [2024-07-15 11:40:58.312693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.733 [2024-07-15 11:40:58.312703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.733 [2024-07-15 11:40:58.312710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.733 [2024-07-15 11:40:58.312719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34168 len:8 PRP1 0x0 PRP2 0x0 00:25:38.733 [2024-07-15 11:40:58.312728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.733 [2024-07-15 11:40:58.312738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.733 [2024-07-15 11:40:58.312746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.733 [2024-07-15 11:40:58.312754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34176 len:8 PRP1 0x0 PRP2 0x0 00:25:38.733 [2024-07-15 11:40:58.312764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.733 [2024-07-15 11:40:58.312774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.734 [2024-07-15 11:40:58.312781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.734 [2024-07-15 11:40:58.312789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34184 len:8 PRP1 0x0 PRP2 0x0 00:25:38.734 [2024-07-15 11:40:58.312799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:38.734 [2024-07-15 11:40:58.312809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.734 [2024-07-15 11:40:58.312816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.734 [2024-07-15 11:40:58.312824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34192 len:8 PRP1 0x0 PRP2 0x0 00:25:38.734 [2024-07-15 11:40:58.312834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.734 [2024-07-15 11:40:58.312844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.734 [2024-07-15 11:40:58.312851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.734 [2024-07-15 11:40:58.312861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34200 len:8 PRP1 0x0 PRP2 0x0 00:25:38.734 [2024-07-15 11:40:58.312870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.734 [2024-07-15 11:40:58.312880] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.734 [2024-07-15 11:40:58.312888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.734 [2024-07-15 11:40:58.312896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34208 len:8 PRP1 0x0 PRP2 0x0 00:25:38.734 [2024-07-15 11:40:58.312906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.734 [2024-07-15 11:40:58.312916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.734 [2024-07-15 11:40:58.312923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.734 [2024-07-15 11:40:58.312931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34216 len:8 PRP1 0x0 PRP2 0x0 00:25:38.734 [2024-07-15 11:40:58.312941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.734 [2024-07-15 11:40:58.312951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.734 [2024-07-15 11:40:58.312959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.734 [2024-07-15 11:40:58.312967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34224 len:8 PRP1 0x0 PRP2 0x0 00:25:38.734 [2024-07-15 11:40:58.312976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.734 [2024-07-15 11:40:58.312986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.734 [2024-07-15 11:40:58.312994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.734 [2024-07-15 11:40:58.313002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34232 len:8 PRP1 0x0 PRP2 0x0 00:25:38.734 [2024-07-15 11:40:58.313011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.734 [2024-07-15 11:40:58.313021] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.734 [2024-07-15 11:40:58.313028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.734 [2024-07-15 11:40:58.313037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34240 len:8 PRP1 0x0 PRP2 0x0 00:25:38.734 [2024-07-15 11:40:58.313046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.734 [2024-07-15 11:40:58.313056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.734 [2024-07-15 11:40:58.313063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.734 [2024-07-15 11:40:58.313071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34248 len:8 PRP1 0x0 PRP2 0x0 00:25:38.734 [2024-07-15 11:40:58.313081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.734 [2024-07-15 11:40:58.313091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.734 [2024-07-15 11:40:58.313098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.734 [2024-07-15 11:40:58.313106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34256 len:8 PRP1 0x0 PRP2 0x0 00:25:38.734 [2024-07-15 11:40:58.313115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.734 [2024-07-15 11:40:58.313128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.734 [2024-07-15 11:40:58.313136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.734 [2024-07-15 11:40:58.313144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34264 len:8 PRP1 0x0 PRP2 0x0 00:25:38.734 [2024-07-15 11:40:58.313153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.734 [2024-07-15 11:40:58.313163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.734 [2024-07-15 11:40:58.313170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.734 [2024-07-15 11:40:58.313178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34272 len:8 PRP1 0x0 PRP2 0x0 00:25:38.734 [2024-07-15 11:40:58.313188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.734 [2024-07-15 11:40:58.313199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.734 [2024-07-15 11:40:58.313206] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.734 [2024-07-15 11:40:58.313214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34280 len:8 PRP1 0x0 PRP2 0x0 00:25:38.734 [2024-07-15 11:40:58.313224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.734 [2024-07-15 11:40:58.313234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:25:38.734 [2024-07-15 11:40:58.313241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.734 [2024-07-15 11:40:58.313249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34288 len:8 PRP1 0x0 PRP2 0x0 00:25:38.734 [2024-07-15 11:40:58.313264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.734 [2024-07-15 11:40:58.313274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.734 [2024-07-15 11:40:58.313281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.734 [2024-07-15 11:40:58.313290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34296 len:8 PRP1 0x0 PRP2 0x0 00:25:38.734 [2024-07-15 11:40:58.313299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.734 [2024-07-15 11:40:58.313309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.734 [2024-07-15 11:40:58.313316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.734 [2024-07-15 11:40:58.313324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34304 len:8 PRP1 0x0 PRP2 0x0 00:25:38.734 [2024-07-15 11:40:58.313334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.734 [2024-07-15 11:40:58.313343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.734 [2024-07-15 11:40:58.313351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.734 [2024-07-15 11:40:58.313359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34312 len:8 PRP1 0x0 PRP2 0x0 00:25:38.734 [2024-07-15 11:40:58.313369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.734 [2024-07-15 11:40:58.313379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.734 [2024-07-15 11:40:58.313387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.734 [2024-07-15 11:40:58.313395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34320 len:8 PRP1 0x0 PRP2 0x0 00:25:38.734 [2024-07-15 11:40:58.313406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.734 [2024-07-15 11:40:58.313417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.734 [2024-07-15 11:40:58.313424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.734 [2024-07-15 11:40:58.313432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34328 len:8 PRP1 0x0 PRP2 0x0 00:25:38.734 [2024-07-15 11:40:58.313442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.734 [2024-07-15 11:40:58.313452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.734 [2024-07-15 
11:40:58.313459] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.734 [2024-07-15 11:40:58.313468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34336 len:8 PRP1 0x0 PRP2 0x0 00:25:38.734 [2024-07-15 11:40:58.313477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.734 [2024-07-15 11:40:58.313487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.734 [2024-07-15 11:40:58.313494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.734 [2024-07-15 11:40:58.313502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34344 len:8 PRP1 0x0 PRP2 0x0 00:25:38.734 [2024-07-15 11:40:58.313512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.734 [2024-07-15 11:40:58.313521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.734 [2024-07-15 11:40:58.313529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.734 [2024-07-15 11:40:58.313537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34352 len:8 PRP1 0x0 PRP2 0x0 00:25:38.735 [2024-07-15 11:40:58.313546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.735 [2024-07-15 11:40:58.313557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.735 [2024-07-15 11:40:58.313564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.735 [2024-07-15 11:40:58.313572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34360 len:8 PRP1 0x0 PRP2 0x0 00:25:38.735 [2024-07-15 11:40:58.313581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.735 [2024-07-15 11:40:58.313591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.735 [2024-07-15 11:40:58.313598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.735 [2024-07-15 11:40:58.313606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34368 len:8 PRP1 0x0 PRP2 0x0 00:25:38.735 [2024-07-15 11:40:58.313616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.735 [2024-07-15 11:40:58.313626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.735 [2024-07-15 11:40:58.313633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.735 [2024-07-15 11:40:58.313641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34376 len:8 PRP1 0x0 PRP2 0x0 00:25:38.735 [2024-07-15 11:40:58.313651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.735 [2024-07-15 11:40:58.313660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.735 [2024-07-15 11:40:58.313668] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.735 [2024-07-15 11:40:58.313678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34384 len:8 PRP1 0x0 PRP2 0x0 00:25:38.735 [2024-07-15 11:40:58.313687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.735 [2024-07-15 11:40:58.313698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.735 [2024-07-15 11:40:58.313705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.735 [2024-07-15 11:40:58.313713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34392 len:8 PRP1 0x0 PRP2 0x0 00:25:38.735 [2024-07-15 11:40:58.313722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.735 [2024-07-15 11:40:58.313732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.735 [2024-07-15 11:40:58.313740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.735 [2024-07-15 11:40:58.313748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34400 len:8 PRP1 0x0 PRP2 0x0 00:25:38.735 [2024-07-15 11:40:58.313757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.735 [2024-07-15 11:40:58.313767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.735 [2024-07-15 11:40:58.313774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.735 [2024-07-15 11:40:58.313782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34408 len:8 PRP1 0x0 PRP2 0x0 00:25:38.735 [2024-07-15 11:40:58.313792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.735 [2024-07-15 11:40:58.313802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.735 [2024-07-15 11:40:58.313809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.735 [2024-07-15 11:40:58.313817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34416 len:8 PRP1 0x0 PRP2 0x0 00:25:38.735 [2024-07-15 11:40:58.313826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.735 [2024-07-15 11:40:58.313836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.735 [2024-07-15 11:40:58.313843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.735 [2024-07-15 11:40:58.313851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34424 len:8 PRP1 0x0 PRP2 0x0 00:25:38.735 [2024-07-15 11:40:58.313861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.735 [2024-07-15 11:40:58.313870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.735 [2024-07-15 11:40:58.313877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:25:38.735 [2024-07-15 11:40:58.313886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34432 len:8 PRP1 0x0 PRP2 0x0 00:25:38.735 [2024-07-15 11:40:58.313895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.735 [2024-07-15 11:40:58.313905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.735 [2024-07-15 11:40:58.313913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.735 [2024-07-15 11:40:58.313920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34440 len:8 PRP1 0x0 PRP2 0x0 00:25:38.735 [2024-07-15 11:40:58.313930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.735 [2024-07-15 11:40:58.313940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.735 [2024-07-15 11:40:58.313949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.735 [2024-07-15 11:40:58.313957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34448 len:8 PRP1 0x0 PRP2 0x0 00:25:38.735 [2024-07-15 11:40:58.313966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.735 [2024-07-15 11:40:58.313977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.735 [2024-07-15 11:40:58.313984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.735 [2024-07-15 11:40:58.313992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34456 len:8 PRP1 0x0 PRP2 0x0 00:25:38.735 [2024-07-15 11:40:58.314002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.735 [2024-07-15 11:40:58.314011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.735 [2024-07-15 11:40:58.314019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.735 [2024-07-15 11:40:58.314027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34464 len:8 PRP1 0x0 PRP2 0x0 00:25:38.735 [2024-07-15 11:40:58.314036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.735 [2024-07-15 11:40:58.314046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.735 [2024-07-15 11:40:58.314054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.735 [2024-07-15 11:40:58.314064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34472 len:8 PRP1 0x0 PRP2 0x0 00:25:38.735 [2024-07-15 11:40:58.314074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.735 [2024-07-15 11:40:58.314084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.735 [2024-07-15 11:40:58.314091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.735 [2024-07-15 
11:40:58.314099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34480 len:8 PRP1 0x0 PRP2 0x0 00:25:38.735 [2024-07-15 11:40:58.314108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.735 [2024-07-15 11:40:58.314119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.735 [2024-07-15 11:40:58.314126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.735 [2024-07-15 11:40:58.314134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34488 len:8 PRP1 0x0 PRP2 0x0 00:25:38.735 [2024-07-15 11:40:58.314143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.735 [2024-07-15 11:40:58.314153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.735 [2024-07-15 11:40:58.314160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.735 [2024-07-15 11:40:58.321355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34496 len:8 PRP1 0x0 PRP2 0x0 00:25:38.735 [2024-07-15 11:40:58.321374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.735 [2024-07-15 11:40:58.321389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.735 [2024-07-15 11:40:58.321399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.735 [2024-07-15 11:40:58.321411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34504 len:8 PRP1 0x0 PRP2 0x0 00:25:38.735 [2024-07-15 11:40:58.321424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.735 [2024-07-15 11:40:58.321441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.735 [2024-07-15 11:40:58.321452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.735 [2024-07-15 11:40:58.321463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34512 len:8 PRP1 0x0 PRP2 0x0 00:25:38.735 [2024-07-15 11:40:58.321476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.735 [2024-07-15 11:40:58.321490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.735 [2024-07-15 11:40:58.321500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.736 [2024-07-15 11:40:58.321511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34520 len:8 PRP1 0x0 PRP2 0x0 00:25:38.736 [2024-07-15 11:40:58.321524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.736 [2024-07-15 11:40:58.321538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.736 [2024-07-15 11:40:58.321548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.736 [2024-07-15 11:40:58.321560] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34528 len:8 PRP1 0x0 PRP2 0x0 00:25:38.736 [2024-07-15 11:40:58.321573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.736 [2024-07-15 11:40:58.321588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.736 [2024-07-15 11:40:58.321598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.736 [2024-07-15 11:40:58.321610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34536 len:8 PRP1 0x0 PRP2 0x0 00:25:38.736 [2024-07-15 11:40:58.321623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.736 [2024-07-15 11:40:58.321637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.736 [2024-07-15 11:40:58.321647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.736 [2024-07-15 11:40:58.321659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34544 len:8 PRP1 0x0 PRP2 0x0 00:25:38.736 [2024-07-15 11:40:58.321671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.736 [2024-07-15 11:40:58.321685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.736 [2024-07-15 11:40:58.321695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.736 [2024-07-15 11:40:58.321707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34552 len:8 PRP1 0x0 PRP2 0x0 00:25:38.736 [2024-07-15 11:40:58.321720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.736 [2024-07-15 11:40:58.321734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.736 [2024-07-15 11:40:58.321744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.736 [2024-07-15 11:40:58.321755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34560 len:8 PRP1 0x0 PRP2 0x0 00:25:38.736 [2024-07-15 11:40:58.321769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.736 [2024-07-15 11:40:58.321782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.736 [2024-07-15 11:40:58.321793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.736 [2024-07-15 11:40:58.321804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34568 len:8 PRP1 0x0 PRP2 0x0 00:25:38.736 [2024-07-15 11:40:58.321820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.736 [2024-07-15 11:40:58.321834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.736 [2024-07-15 11:40:58.321844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.736 [2024-07-15 11:40:58.321855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:34576 len:8 PRP1 0x0 PRP2 0x0 00:25:38.736 [2024-07-15 11:40:58.321868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.736 [2024-07-15 11:40:58.321882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.736 [2024-07-15 11:40:58.321892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.736 [2024-07-15 11:40:58.321904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34584 len:8 PRP1 0x0 PRP2 0x0 00:25:38.736 [2024-07-15 11:40:58.321917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.736 [2024-07-15 11:40:58.321931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.736 [2024-07-15 11:40:58.321941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.736 [2024-07-15 11:40:58.321953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34592 len:8 PRP1 0x0 PRP2 0x0 00:25:38.736 [2024-07-15 11:40:58.321966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.736 [2024-07-15 11:40:58.321980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.736 [2024-07-15 11:40:58.321990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.736 [2024-07-15 11:40:58.322002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34600 len:8 PRP1 0x0 PRP2 0x0 00:25:38.736 [2024-07-15 11:40:58.322015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.736 [2024-07-15 11:40:58.322029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.736 [2024-07-15 11:40:58.322039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.736 [2024-07-15 11:40:58.322050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34608 len:8 PRP1 0x0 PRP2 0x0 00:25:38.736 [2024-07-15 11:40:58.322063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.736 [2024-07-15 11:40:58.322078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.736 [2024-07-15 11:40:58.322088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.736 [2024-07-15 11:40:58.322099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34616 len:8 PRP1 0x0 PRP2 0x0 00:25:38.736 [2024-07-15 11:40:58.322112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.736 [2024-07-15 11:40:58.322126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.736 [2024-07-15 11:40:58.322136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.736 [2024-07-15 11:40:58.322147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34624 len:8 PRP1 0x0 PRP2 0x0 
00:25:38.736 [2024-07-15 11:40:58.322161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.736 [2024-07-15 11:40:58.322175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.736 [2024-07-15 11:40:58.322185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.736 [2024-07-15 11:40:58.322198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34632 len:8 PRP1 0x0 PRP2 0x0 00:25:38.736 [2024-07-15 11:40:58.322212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.736 [2024-07-15 11:40:58.322226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.736 [2024-07-15 11:40:58.322236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.736 [2024-07-15 11:40:58.322247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34640 len:8 PRP1 0x0 PRP2 0x0 00:25:38.736 [2024-07-15 11:40:58.322270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.736 [2024-07-15 11:40:58.322285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.736 [2024-07-15 11:40:58.322295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.736 [2024-07-15 11:40:58.322306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34648 len:8 PRP1 0x0 PRP2 0x0 00:25:38.736 [2024-07-15 11:40:58.322320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.736 [2024-07-15 11:40:58.322334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.736 [2024-07-15 11:40:58.322345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.736 [2024-07-15 11:40:58.322356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34656 len:8 PRP1 0x0 PRP2 0x0 00:25:38.736 [2024-07-15 11:40:58.322370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.736 [2024-07-15 11:40:58.322384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.736 [2024-07-15 11:40:58.322395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.736 [2024-07-15 11:40:58.322406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34664 len:8 PRP1 0x0 PRP2 0x0 00:25:38.736 [2024-07-15 11:40:58.322420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.736 [2024-07-15 11:40:58.322434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.736 [2024-07-15 11:40:58.322444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.736 [2024-07-15 11:40:58.322456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33832 len:8 PRP1 0x0 PRP2 0x0 00:25:38.736 [2024-07-15 11:40:58.322470] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.736 [2024-07-15 11:40:58.322484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.736 [2024-07-15 11:40:58.322495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.736 [2024-07-15 11:40:58.322506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33840 len:8 PRP1 0x0 PRP2 0x0 00:25:38.736 [2024-07-15 11:40:58.322520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.736 [2024-07-15 11:40:58.322534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.736 [2024-07-15 11:40:58.322545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.736 [2024-07-15 11:40:58.322556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34672 len:8 PRP1 0x0 PRP2 0x0 00:25:38.736 [2024-07-15 11:40:58.322570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.736 [2024-07-15 11:40:58.322587] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.736 [2024-07-15 11:40:58.322597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.736 [2024-07-15 11:40:58.322609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34680 len:8 PRP1 0x0 PRP2 0x0 00:25:38.736 [2024-07-15 11:40:58.322623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.737 [2024-07-15 11:40:58.322637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.737 [2024-07-15 11:40:58.322648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.737 [2024-07-15 11:40:58.322659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34688 len:8 PRP1 0x0 PRP2 0x0 00:25:38.737 [2024-07-15 11:40:58.322672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.737 [2024-07-15 11:40:58.322687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.737 [2024-07-15 11:40:58.322697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.737 [2024-07-15 11:40:58.322708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34696 len:8 PRP1 0x0 PRP2 0x0 00:25:38.737 [2024-07-15 11:40:58.322722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.737 [2024-07-15 11:40:58.322736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.737 [2024-07-15 11:40:58.322747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.737 [2024-07-15 11:40:58.322759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34704 len:8 PRP1 0x0 PRP2 0x0 00:25:38.737 [2024-07-15 11:40:58.322772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.737 [2024-07-15 11:40:58.322786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.737 [2024-07-15 11:40:58.322797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.737 [2024-07-15 11:40:58.322808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34712 len:8 PRP1 0x0 PRP2 0x0 00:25:38.737 [2024-07-15 11:40:58.322822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.737 [2024-07-15 11:40:58.322836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.737 [2024-07-15 11:40:58.322846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.737 [2024-07-15 11:40:58.322858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34720 len:8 PRP1 0x0 PRP2 0x0 00:25:38.737 [2024-07-15 11:40:58.322872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.737 [2024-07-15 11:40:58.322886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.737 [2024-07-15 11:40:58.322896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.737 [2024-07-15 11:40:58.322908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34728 len:8 PRP1 0x0 PRP2 0x0 00:25:38.737 [2024-07-15 11:40:58.322921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.737 [2024-07-15 11:40:58.322935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.737 [2024-07-15 11:40:58.322946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.737 [2024-07-15 11:40:58.322957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34736 len:8 PRP1 0x0 PRP2 0x0 00:25:38.737 [2024-07-15 11:40:58.322976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.737 [2024-07-15 11:40:58.323035] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xaf6150 was disconnected and freed. reset controller. 00:25:38.737 [2024-07-15 11:40:58.323051] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:38.737 [2024-07-15 11:40:58.323066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:38.737 [2024-07-15 11:40:58.323125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb01a30 (9): Bad file descriptor 00:25:38.737 [2024-07-15 11:40:58.330285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:38.737 [2024-07-15 11:40:58.536397] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
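The reset sequence that closes the block above (qpair 0xaf6150 disconnected and freed, failover from 10.0.0.2:4420 to 10.0.0.2:4421 on nqn.2016-06.io.spdk:cnode1, then "Resetting controller successful") is the bdev_nvme behaviour this test exercises: when the primary path's submission queue is deleted, every queued request is manually completed with ABORTED - SQ DELETION (00/08) and the initiator retries on the alternate transport ID. Below is a minimal sketch, in plain shell, of how such a two-path setup is typically driven through SPDK's rpc.py. The addresses, ports, and subsystem NQN are the ones that appear in the log; the rpc.py path, bdev names (Malloc0/NVMe0), namespace size, and exact option spellings are assumptions for illustration and can differ between SPDK releases.

#!/usr/bin/env bash
# Sketch only: approximate rpc.py calls for a two-listener NVMe-oF/TCP target plus a
# host-side bdev_nvme controller with a failover trid, matching the addresses seen in
# the log above. Command names and flags are believed correct for SPDK of this era,
# but they are not taken from this build's scripts; treat them as assumptions.
set -euo pipefail

rpc=./scripts/rpc.py                 # assumed location of the SPDK RPC helper
nqn=nqn.2016-06.io.spdk:cnode1       # subsystem NQN from the log

# Target side: one subsystem, one namespace, two TCP listeners (primary + failover).
$rpc nvmf_create_transport -t tcp
$rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB backing bdev, 512 B blocks (assumed size)
$rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns "$nqn" Malloc0
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421

# Host side: attach the same controller name twice with the same NQN; the second call
# registers 10.0.0.2:4421 as an alternate (failover) trid for NVMe0.
$rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$nqn"
$rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$nqn"

# Dropping the primary listener while I/O is running forces the sequence logged above:
# in-flight WRITEs are aborted with SQ DELETION, bdev_nvme starts failover to port 4421
# and resets the controller on the new path.
$rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

With a setup along these lines, the long run of "Command completed manually" / "ABORTED - SQ DELETION" records above corresponds to the queued WRITEs being drained before the retry on the new path, and the subsequent timestamp block (11:41:01) is the next failover iteration of the same test.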
00:25:38.737 [2024-07-15 11:41:01.881197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.737 [2024-07-15 11:41:01.881248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.737 [2024-07-15 11:41:01.881273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:119984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.737 [2024-07-15 11:41:01.881284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.737 [2024-07-15 11:41:01.881297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:119992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.737 [2024-07-15 11:41:01.881307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.737 [2024-07-15 11:41:01.881319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:120000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.737 [2024-07-15 11:41:01.881329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.737 [2024-07-15 11:41:01.881341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:120008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.737 [2024-07-15 11:41:01.881351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.737 [2024-07-15 11:41:01.881362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:120016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.737 [2024-07-15 11:41:01.881372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.737 [2024-07-15 11:41:01.881383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:120024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.737 [2024-07-15 11:41:01.881393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.737 [2024-07-15 11:41:01.881404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.737 [2024-07-15 11:41:01.881414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.737 [2024-07-15 11:41:01.881426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.737 [2024-07-15 11:41:01.881436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.737 [2024-07-15 11:41:01.881448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.737 [2024-07-15 11:41:01.881457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.737 [2024-07-15 
11:41:01.881474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:120056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.737 [2024-07-15 11:41:01.881484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.737 [2024-07-15 11:41:01.881496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:120064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.737 [2024-07-15 11:41:01.881505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.737 [2024-07-15 11:41:01.881517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.737 [2024-07-15 11:41:01.881527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.737 [2024-07-15 11:41:01.881538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:120080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.737 [2024-07-15 11:41:01.881548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.737 [2024-07-15 11:41:01.881560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.737 [2024-07-15 11:41:01.881569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.737 [2024-07-15 11:41:01.881581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.737 [2024-07-15 11:41:01.881591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.737 [2024-07-15 11:41:01.881603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.738 [2024-07-15 11:41:01.881612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.881625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.738 [2024-07-15 11:41:01.881634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.881646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.738 [2024-07-15 11:41:01.881655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.881667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.738 [2024-07-15 11:41:01.881677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.881688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.738 [2024-07-15 11:41:01.881698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.881710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:120144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.738 [2024-07-15 11:41:01.881720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.881731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.738 [2024-07-15 11:41:01.881742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.881754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:120160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.738 [2024-07-15 11:41:01.881763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.881775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:120168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.738 [2024-07-15 11:41:01.881784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.881796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.738 [2024-07-15 11:41:01.881805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.881817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.738 [2024-07-15 11:41:01.881826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.881838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.738 [2024-07-15 11:41:01.881847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.881859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.738 [2024-07-15 11:41:01.881869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.881881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.738 [2024-07-15 11:41:01.881890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.881902] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:120216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.738 [2024-07-15 11:41:01.881911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.881923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.738 [2024-07-15 11:41:01.881934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.881945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.738 [2024-07-15 11:41:01.881954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.881966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.738 [2024-07-15 11:41:01.881976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.881988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:120248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.738 [2024-07-15 11:41:01.881997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.882011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.738 [2024-07-15 11:41:01.882020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.882031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:120264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.738 [2024-07-15 11:41:01.882041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.882052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:120272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.738 [2024-07-15 11:41:01.882061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.882073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:120280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.738 [2024-07-15 11:41:01.882082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.882093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:120288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.738 [2024-07-15 11:41:01.882103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.882114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 
lba:120296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.738 [2024-07-15 11:41:01.882123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.882134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:120304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.738 [2024-07-15 11:41:01.882143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.882155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.738 [2024-07-15 11:41:01.882165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.882176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:120320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.738 [2024-07-15 11:41:01.882185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.882196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:120328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.738 [2024-07-15 11:41:01.882206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.882218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:120336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.738 [2024-07-15 11:41:01.882227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.882238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.738 [2024-07-15 11:41:01.882247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.882264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:120352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.738 [2024-07-15 11:41:01.882276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.882287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.738 [2024-07-15 11:41:01.882297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.882309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.738 [2024-07-15 11:41:01.882318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.882330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:38.738 [2024-07-15 11:41:01.882339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.882351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.738 [2024-07-15 11:41:01.882360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.882371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:120392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.738 [2024-07-15 11:41:01.882380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.738 [2024-07-15 11:41:01.882391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:120400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:120416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 
11:41:01.882552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:120472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:120512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:120616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.882979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.882988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.883000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.883009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.883020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.883030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.883041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.883050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.883062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.739 [2024-07-15 11:41:01.883074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.739 [2024-07-15 11:41:01.883086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.740 [2024-07-15 11:41:01.883095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.740 [2024-07-15 11:41:01.883115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.740 [2024-07-15 11:41:01.883137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.740 [2024-07-15 11:41:01.883157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.740 [2024-07-15 11:41:01.883177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.740 [2024-07-15 11:41:01.883198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.740 [2024-07-15 11:41:01.883218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.740 [2024-07-15 11:41:01.883239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.740 [2024-07-15 11:41:01.883264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.740 [2024-07-15 11:41:01.883284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.740 [2024-07-15 11:41:01.883305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.740 [2024-07-15 11:41:01.883331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.740 [2024-07-15 11:41:01.883354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.740 [2024-07-15 11:41:01.883375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.740 [2024-07-15 11:41:01.883396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:38.740 [2024-07-15 11:41:01.883408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.740 [2024-07-15 11:41:01.883418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.740 [2024-07-15 11:41:01.883441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.740 [2024-07-15 11:41:01.883461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.740 [2024-07-15 11:41:01.883482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.740 [2024-07-15 11:41:01.883502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.740 [2024-07-15 11:41:01.883523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.740 [2024-07-15 11:41:01.883543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.740 [2024-07-15 11:41:01.883564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.740 [2024-07-15 11:41:01.883584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.740 [2024-07-15 11:41:01.883626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120856 len:8 PRP1 0x0 PRP2 0x0 00:25:38.740 [2024-07-15 11:41:01.883636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.740 [2024-07-15 11:41:01.883655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.740 [2024-07-15 11:41:01.883663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120864 len:8 PRP1 0x0 PRP2 0x0 00:25:38.740 [2024-07-15 11:41:01.883674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.740 [2024-07-15 11:41:01.883691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.740 [2024-07-15 11:41:01.883698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120872 len:8 PRP1 0x0 PRP2 0x0 00:25:38.740 [2024-07-15 11:41:01.883708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.740 [2024-07-15 11:41:01.883724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.740 [2024-07-15 11:41:01.883731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120880 len:8 PRP1 0x0 PRP2 0x0 00:25:38.740 [2024-07-15 11:41:01.883740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.740 [2024-07-15 11:41:01.883757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.740 [2024-07-15 11:41:01.883765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120888 len:8 PRP1 0x0 PRP2 0x0 00:25:38.740 [2024-07-15 11:41:01.883774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.740 [2024-07-15 11:41:01.883792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.740 [2024-07-15 11:41:01.883800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120896 len:8 PRP1 0x0 PRP2 0x0 00:25:38.740 [2024-07-15 11:41:01.883808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.740 [2024-07-15 11:41:01.883825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.740 [2024-07-15 11:41:01.883833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120904 len:8 PRP1 0x0 PRP2 0x0 00:25:38.740 [2024-07-15 11:41:01.883841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.740 [2024-07-15 11:41:01.883857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.740 [2024-07-15 11:41:01.883865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120912 len:8 PRP1 0x0 PRP2 0x0 00:25:38.740 [2024-07-15 11:41:01.883874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.740 [2024-07-15 11:41:01.883892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.740 [2024-07-15 11:41:01.883900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120920 len:8 PRP1 0x0 PRP2 0x0 00:25:38.740 [2024-07-15 11:41:01.883909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.740 [2024-07-15 11:41:01.883925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.740 [2024-07-15 11:41:01.883933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120928 len:8 PRP1 0x0 PRP2 0x0 00:25:38.740 [2024-07-15 11:41:01.883943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.740 [2024-07-15 11:41:01.883953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.740 [2024-07-15 11:41:01.883960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.741 [2024-07-15 11:41:01.883967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120936 len:8 PRP1 0x0 PRP2 0x0 00:25:38.741 [2024-07-15 11:41:01.883976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:01.883985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.741 [2024-07-15 11:41:01.883993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.741 [2024-07-15 11:41:01.884000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120944 len:8 PRP1 0x0 PRP2 0x0 00:25:38.741 [2024-07-15 11:41:01.884009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:01.884018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.741 [2024-07-15 11:41:01.884025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.741 [2024-07-15 11:41:01.884033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120952 len:8 PRP1 0x0 PRP2 0x0 00:25:38.741 [2024-07-15 11:41:01.884042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 
11:41:01.884052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.741 [2024-07-15 11:41:01.884059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.741 [2024-07-15 11:41:01.884067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120960 len:8 PRP1 0x0 PRP2 0x0 00:25:38.741 [2024-07-15 11:41:01.884075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:01.884085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.741 [2024-07-15 11:41:01.884092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.741 [2024-07-15 11:41:01.884099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120968 len:8 PRP1 0x0 PRP2 0x0 00:25:38.741 [2024-07-15 11:41:01.884108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:01.884118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.741 [2024-07-15 11:41:01.884124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.741 [2024-07-15 11:41:01.884132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120976 len:8 PRP1 0x0 PRP2 0x0 00:25:38.741 [2024-07-15 11:41:01.884141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:01.884152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.741 [2024-07-15 11:41:01.884159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.741 [2024-07-15 11:41:01.884166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120984 len:8 PRP1 0x0 PRP2 0x0 00:25:38.741 [2024-07-15 11:41:01.884175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:01.884185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.741 [2024-07-15 11:41:01.884192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.741 [2024-07-15 11:41:01.884199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120992 len:8 PRP1 0x0 PRP2 0x0 00:25:38.741 [2024-07-15 11:41:01.884210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:01.884261] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb2e6f0 was disconnected and freed. reset controller. 
00:25:38.741 [2024-07-15 11:41:01.884274] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:38.741 [2024-07-15 11:41:01.884299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:38.741 [2024-07-15 11:41:01.884309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:01.884319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:38.741 [2024-07-15 11:41:01.884328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:01.884338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:38.741 [2024-07-15 11:41:01.884347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:01.884358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:38.741 [2024-07-15 11:41:01.884367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:01.884377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:38.741 [2024-07-15 11:41:01.884413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb01a30 (9): Bad file descriptor 00:25:38.741 [2024-07-15 11:41:01.888623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:38.741 [2024-07-15 11:41:01.977374] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:38.741 [2024-07-15 11:41:06.433603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:38.741 [2024-07-15 11:41:06.433654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:06.433667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:38.741 [2024-07-15 11:41:06.433678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:06.433689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:38.741 [2024-07-15 11:41:06.433699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:06.433715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:38.741 [2024-07-15 11:41:06.433724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:06.433734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01a30 is same with the state(5) to be set 00:25:38.741 [2024-07-15 11:41:06.436352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:103960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.741 [2024-07-15 11:41:06.436379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:06.436397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:103968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.741 [2024-07-15 11:41:06.436407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:06.436420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:103976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.741 [2024-07-15 11:41:06.436429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:06.436442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:103984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.741 [2024-07-15 11:41:06.436451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:06.436463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:103992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.741 [2024-07-15 11:41:06.436475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:06.436486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.741 [2024-07-15 11:41:06.436496] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:06.436508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:104008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.741 [2024-07-15 11:41:06.436517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:06.436529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:104016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.741 [2024-07-15 11:41:06.436538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:06.436550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:104024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.741 [2024-07-15 11:41:06.436560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:06.436571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:104032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.741 [2024-07-15 11:41:06.436581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:06.436592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:104040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.741 [2024-07-15 11:41:06.436602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:06.436618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:104048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.741 [2024-07-15 11:41:06.436627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.741 [2024-07-15 11:41:06.436639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:104056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.436649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.742 [2024-07-15 11:41:06.436661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:104064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.436670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.742 [2024-07-15 11:41:06.436682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:104072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.436691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.742 [2024-07-15 11:41:06.436703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:104080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.436712] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.742 [2024-07-15 11:41:06.436724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:104088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.436734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.742 [2024-07-15 11:41:06.436746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.436756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.742 [2024-07-15 11:41:06.436768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:104104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.436777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.742 [2024-07-15 11:41:06.436788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.436798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.742 [2024-07-15 11:41:06.436810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.436819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.742 [2024-07-15 11:41:06.436831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:104128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.436840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.742 [2024-07-15 11:41:06.436852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:104136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.436861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.742 [2024-07-15 11:41:06.436873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.436884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.742 [2024-07-15 11:41:06.436895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:104152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.436905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.742 [2024-07-15 11:41:06.436917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:104160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.436926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.742 [2024-07-15 11:41:06.436938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:104168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.436948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.742 [2024-07-15 11:41:06.436959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:104176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.436968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.742 [2024-07-15 11:41:06.436980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:104184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.436990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.742 [2024-07-15 11:41:06.437001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:104192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.437010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.742 [2024-07-15 11:41:06.437022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:104200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.437031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.742 [2024-07-15 11:41:06.437043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:104208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.437052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.742 [2024-07-15 11:41:06.437064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:104216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.437073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.742 [2024-07-15 11:41:06.437085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:104224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.437095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.742 [2024-07-15 11:41:06.437106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:104232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.437116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.742 [2024-07-15 11:41:06.437128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:104240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.437137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:38.742 [2024-07-15 11:41:06.437151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:104248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.437160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.742 [2024-07-15 11:41:06.437172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:104256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.437181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.742 [2024-07-15 11:41:06.437193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:104264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.742 [2024-07-15 11:41:06.437202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:104272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.743 [2024-07-15 11:41:06.437223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:104280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.743 [2024-07-15 11:41:06.437245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:104288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.743 [2024-07-15 11:41:06.437272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:104296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.743 [2024-07-15 11:41:06.437293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:104304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.743 [2024-07-15 11:41:06.437314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:104312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.743 [2024-07-15 11:41:06.437335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:104320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.743 [2024-07-15 11:41:06.437356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:38.743 [2024-07-15 11:41:06.437368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:104328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.743 [2024-07-15 11:41:06.437377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:104336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.743 [2024-07-15 11:41:06.437399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.743 [2024-07-15 11:41:06.437423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:104352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.743 [2024-07-15 11:41:06.437444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:104360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.743 [2024-07-15 11:41:06.437466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.743 [2024-07-15 11:41:06.437487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.743 [2024-07-15 11:41:06.437508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:104384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.743 [2024-07-15 11:41:06.437529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:104400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.743 [2024-07-15 11:41:06.437550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:104408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.743 [2024-07-15 11:41:06.437571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 
11:41:06.437583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:104416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.743 [2024-07-15 11:41:06.437592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.743 [2024-07-15 11:41:06.437613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:104432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.743 [2024-07-15 11:41:06.437634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:104440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.743 [2024-07-15 11:41:06.437655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:104448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.743 [2024-07-15 11:41:06.437677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.743 [2024-07-15 11:41:06.437700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:104464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.743 [2024-07-15 11:41:06.437721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:104472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.743 [2024-07-15 11:41:06.437742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:104480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.743 [2024-07-15 11:41:06.437763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.743 [2024-07-15 11:41:06.437785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437796] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:104496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.743 [2024-07-15 11:41:06.437806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:104504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.743 [2024-07-15 11:41:06.437827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.743 [2024-07-15 11:41:06.437848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:104520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.743 [2024-07-15 11:41:06.437869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:104528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.743 [2024-07-15 11:41:06.437890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:104536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.743 [2024-07-15 11:41:06.437911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:104544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.743 [2024-07-15 11:41:06.437932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:104552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.743 [2024-07-15 11:41:06.437953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:104560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.743 [2024-07-15 11:41:06.437975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.437988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:104568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.743 [2024-07-15 11:41:06.437997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.438009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:104576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.743 [2024-07-15 11:41:06.438018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.438030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.743 [2024-07-15 11:41:06.438039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.438051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:104592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.743 [2024-07-15 11:41:06.438060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.743 [2024-07-15 11:41:06.438071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:104600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.744 [2024-07-15 11:41:06.438081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:104608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.744 [2024-07-15 11:41:06.438102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:104616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.744 [2024-07-15 11:41:06.438123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:104624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.744 [2024-07-15 11:41:06.438144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:104632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.744 [2024-07-15 11:41:06.438165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:104640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.744 [2024-07-15 11:41:06.438185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:104648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.744 [2024-07-15 11:41:06.438207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:104656 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.744 [2024-07-15 11:41:06.438229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.744 [2024-07-15 11:41:06.438250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:104672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.744 [2024-07-15 11:41:06.438276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:104680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.744 [2024-07-15 11:41:06.438298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:104688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.744 [2024-07-15 11:41:06.438320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:104696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.744 [2024-07-15 11:41:06.438342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:104704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.744 [2024-07-15 11:41:06.438363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.744 [2024-07-15 11:41:06.438384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.744 [2024-07-15 11:41:06.438405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.744 [2024-07-15 11:41:06.438427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:104736 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:38.744 [2024-07-15 11:41:06.438450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:104744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.744 [2024-07-15 11:41:06.438472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:104752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.744 [2024-07-15 11:41:06.438494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:104760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.744 [2024-07-15 11:41:06.438520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.744 [2024-07-15 11:41:06.438542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.744 [2024-07-15 11:41:06.438563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.744 [2024-07-15 11:41:06.438613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104784 len:8 PRP1 0x0 PRP2 0x0 00:25:38.744 [2024-07-15 11:41:06.438622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.744 [2024-07-15 11:41:06.438642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.744 [2024-07-15 11:41:06.438651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104792 len:8 PRP1 0x0 PRP2 0x0 00:25:38.744 [2024-07-15 11:41:06.438661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.744 [2024-07-15 11:41:06.438678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.744 [2024-07-15 11:41:06.438686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104800 len:8 PRP1 0x0 PRP2 0x0 00:25:38.744 [2024-07-15 11:41:06.438696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.744 [2024-07-15 11:41:06.438713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.744 [2024-07-15 11:41:06.438721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104808 len:8 PRP1 0x0 PRP2 0x0 00:25:38.744 [2024-07-15 11:41:06.438730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.744 [2024-07-15 11:41:06.438747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.744 [2024-07-15 11:41:06.438755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104816 len:8 PRP1 0x0 PRP2 0x0 00:25:38.744 [2024-07-15 11:41:06.438764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.744 [2024-07-15 11:41:06.438781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.744 [2024-07-15 11:41:06.438791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104824 len:8 PRP1 0x0 PRP2 0x0 00:25:38.744 [2024-07-15 11:41:06.438800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.744 [2024-07-15 11:41:06.438820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.744 [2024-07-15 11:41:06.438828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104832 len:8 PRP1 0x0 PRP2 0x0 00:25:38.744 [2024-07-15 11:41:06.438837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.744 [2024-07-15 11:41:06.438854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.744 [2024-07-15 11:41:06.438863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104840 len:8 PRP1 0x0 PRP2 0x0 00:25:38.744 [2024-07-15 11:41:06.438872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.744 [2024-07-15 11:41:06.438889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.744 [2024-07-15 11:41:06.438897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104848 len:8 PRP1 0x0 PRP2 0x0 00:25:38.744 [2024-07-15 11:41:06.438906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 
11:41:06.438916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.744 [2024-07-15 11:41:06.438923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.744 [2024-07-15 11:41:06.438931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104856 len:8 PRP1 0x0 PRP2 0x0 00:25:38.744 [2024-07-15 11:41:06.438940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.744 [2024-07-15 11:41:06.438957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.744 [2024-07-15 11:41:06.438965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104864 len:8 PRP1 0x0 PRP2 0x0 00:25:38.744 [2024-07-15 11:41:06.438974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.744 [2024-07-15 11:41:06.438984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.744 [2024-07-15 11:41:06.438991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.744 [2024-07-15 11:41:06.438998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104872 len:8 PRP1 0x0 PRP2 0x0 00:25:38.745 [2024-07-15 11:41:06.439008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.745 [2024-07-15 11:41:06.439017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.745 [2024-07-15 11:41:06.439025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.745 [2024-07-15 11:41:06.439033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104880 len:8 PRP1 0x0 PRP2 0x0 00:25:38.745 [2024-07-15 11:41:06.439043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.745 [2024-07-15 11:41:06.439053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.745 [2024-07-15 11:41:06.439060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.745 [2024-07-15 11:41:06.439070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104888 len:8 PRP1 0x0 PRP2 0x0 00:25:38.745 [2024-07-15 11:41:06.439081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.745 [2024-07-15 11:41:06.439091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.745 [2024-07-15 11:41:06.439098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.745 [2024-07-15 11:41:06.439106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104896 len:8 PRP1 0x0 PRP2 0x0 00:25:38.745 [2024-07-15 11:41:06.439115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.745 [2024-07-15 11:41:06.439125] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.745 [2024-07-15 11:41:06.439133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.745 [2024-07-15 11:41:06.439140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104904 len:8 PRP1 0x0 PRP2 0x0 00:25:38.745 [2024-07-15 11:41:06.439150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.745 [2024-07-15 11:41:06.439159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.745 [2024-07-15 11:41:06.439167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.745 [2024-07-15 11:41:06.439175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104912 len:8 PRP1 0x0 PRP2 0x0 00:25:38.745 [2024-07-15 11:41:06.439184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.745 [2024-07-15 11:41:06.439195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.745 [2024-07-15 11:41:06.439202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.745 [2024-07-15 11:41:06.439210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104920 len:8 PRP1 0x0 PRP2 0x0 00:25:38.745 [2024-07-15 11:41:06.439219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.745 [2024-07-15 11:41:06.439229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.745 [2024-07-15 11:41:06.439236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.745 [2024-07-15 11:41:06.439244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104928 len:8 PRP1 0x0 PRP2 0x0 00:25:38.745 [2024-07-15 11:41:06.439259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.745 [2024-07-15 11:41:06.439269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.745 [2024-07-15 11:41:06.439276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.745 [2024-07-15 11:41:06.439284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104936 len:8 PRP1 0x0 PRP2 0x0 00:25:38.745 [2024-07-15 11:41:06.439294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.745 [2024-07-15 11:41:06.439304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.745 [2024-07-15 11:41:06.439311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.745 [2024-07-15 11:41:06.439319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104944 len:8 PRP1 0x0 PRP2 0x0 00:25:38.745 [2024-07-15 11:41:06.439328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.745 [2024-07-15 11:41:06.439337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:25:38.745 [2024-07-15 11:41:06.439345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.745 [2024-07-15 11:41:06.439356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104952 len:8 PRP1 0x0 PRP2 0x0 00:25:38.745 [2024-07-15 11:41:06.439366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.745 [2024-07-15 11:41:06.439375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.745 [2024-07-15 11:41:06.439383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.745 [2024-07-15 11:41:06.439391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104960 len:8 PRP1 0x0 PRP2 0x0 00:25:38.745 [2024-07-15 11:41:06.439400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.745 [2024-07-15 11:41:06.439410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.745 [2024-07-15 11:41:06.439417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.745 [2024-07-15 11:41:06.439425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104968 len:8 PRP1 0x0 PRP2 0x0 00:25:38.745 [2024-07-15 11:41:06.439434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.745 [2024-07-15 11:41:06.439445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.745 [2024-07-15 11:41:06.439452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.745 [2024-07-15 11:41:06.439460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104976 len:8 PRP1 0x0 PRP2 0x0 00:25:38.745 [2024-07-15 11:41:06.439469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.745 [2024-07-15 11:41:06.450005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.745 [2024-07-15 11:41:06.450021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.745 [2024-07-15 11:41:06.450034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104392 len:8 PRP1 0x0 PRP2 0x0 00:25:38.745 [2024-07-15 11:41:06.450046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.745 [2024-07-15 11:41:06.450102] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb31c10 was disconnected and freed. reset controller. 00:25:38.745 [2024-07-15 11:41:06.450117] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:38.745 [2024-07-15 11:41:06.450130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:38.745 [2024-07-15 11:41:06.450180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb01a30 (9): Bad file descriptor 00:25:38.745 [2024-07-15 11:41:06.456002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:38.745 [2024-07-15 11:41:06.620790] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:38.745 00:25:38.745 Latency(us) 00:25:38.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.745 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:38.745 Verification LBA range: start 0x0 length 0x4000 00:25:38.745 NVMe0n1 : 15.02 4961.01 19.38 936.41 0.00 21668.56 625.57 38606.66 00:25:38.745 =================================================================================================================== 00:25:38.745 Total : 4961.01 19.38 936.41 0.00 21668.56 625.57 38606.66 00:25:38.745 Received shutdown signal, test time was about 15.000000 seconds 00:25:38.745 00:25:38.745 Latency(us) 00:25:38.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.745 =================================================================================================================== 00:25:38.745 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:38.745 11:41:12 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:38.745 11:41:12 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:38.745 11:41:12 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:38.745 11:41:12 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2904657 00:25:38.745 11:41:12 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:38.745 11:41:12 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2904657 /var/tmp/bdevperf.sock 00:25:38.745 11:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2904657 ']' 00:25:38.745 11:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:38.745 11:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:38.745 11:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:38.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
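The bdevperf launch at host/failover.sh@72-75 above follows a common SPDK pattern: start the app with no preconfigured bdevs (-z), put its RPC server on a private socket (-r), and only send configuration RPCs once that socket answers. A minimal sketch of that pattern follows; SPDK_DIR and the polling loop are stand-ins (the harness uses its own waitforlisten helper), while the bdevperf flags are copied from the invocation above.

SPDK_DIR=/path/to/spdk                     # hypothetical checkout location
SOCK=/var/tmp/bdevperf.sock

# Launch bdevperf in wait-for-RPC mode with the queue depth, IO size and
# workload seen above, on a private RPC socket.
"$SPDK_DIR"/build/examples/bdevperf -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!

# Poll the RPC socket instead of sleeping a fixed time; rpc_get_methods is a
# cheap call that succeeds once the app is listening.
until "$SPDK_DIR"/scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
done

Once the socket answers, the attach/detach RPCs that follow in the log can be issued against it.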
00:25:38.745 11:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:38.745 11:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:38.745 11:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:38.745 11:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:38.745 11:41:12 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:38.745 [2024-07-15 11:41:13.073781] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:38.745 11:41:13 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:39.003 [2024-07-15 11:41:13.334747] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:39.003 11:41:13 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:39.259 NVMe0n1 00:25:39.259 11:41:13 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:39.825 00:25:39.825 11:41:14 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:40.083 00:25:40.084 11:41:14 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:40.084 11:41:14 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:40.341 11:41:14 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:40.599 11:41:14 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:43.881 11:41:17 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:43.881 11:41:17 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:43.881 11:41:18 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:43.881 11:41:18 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2905708 00:25:43.881 11:41:18 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2905708 00:25:45.351 0 00:25:45.351 11:41:19 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:45.351 [2024-07-15 11:41:12.579997] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:25:45.351 [2024-07-15 11:41:12.580061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2904657 ] 00:25:45.351 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.351 [2024-07-15 11:41:12.661005] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.351 [2024-07-15 11:41:12.742409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.351 [2024-07-15 11:41:14.908941] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:45.351 [2024-07-15 11:41:14.908994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.351 [2024-07-15 11:41:14.909009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.351 [2024-07-15 11:41:14.909020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.351 [2024-07-15 11:41:14.909030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.351 [2024-07-15 11:41:14.909041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.351 [2024-07-15 11:41:14.909051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.351 [2024-07-15 11:41:14.909061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.351 [2024-07-15 11:41:14.909071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.351 [2024-07-15 11:41:14.909081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.351 [2024-07-15 11:41:14.909111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.351 [2024-07-15 11:41:14.909128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bfa30 (9): Bad file descriptor 00:25:45.351 [2024-07-15 11:41:14.922633] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:45.351 Running I/O for 1 seconds... 
00:25:45.351 00:25:45.351 Latency(us) 00:25:45.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:45.351 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:45.351 Verification LBA range: start 0x0 length 0x4000 00:25:45.351 NVMe0n1 : 1.05 3653.00 14.27 0.00 0.00 33531.69 5034.36 50045.67 00:25:45.351 =================================================================================================================== 00:25:45.351 Total : 3653.00 14.27 0.00 0.00 33531.69 5034.36 50045.67 00:25:45.351 11:41:19 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:45.351 11:41:19 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:45.351 11:41:19 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:45.610 11:41:19 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:45.610 11:41:19 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:45.868 11:41:20 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:46.126 11:41:20 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:49.413 11:41:23 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:49.413 11:41:23 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:49.413 11:41:23 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2904657 00:25:49.413 11:41:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2904657 ']' 00:25:49.413 11:41:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2904657 00:25:49.413 11:41:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:49.413 11:41:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:49.413 11:41:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2904657 00:25:49.413 11:41:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:49.413 11:41:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:49.413 11:41:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2904657' 00:25:49.413 killing process with pid 2904657 00:25:49.413 11:41:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2904657 00:25:49.413 11:41:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2904657 00:25:49.671 11:41:23 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:49.671 11:41:23 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:49.930 11:41:24 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:49.930 
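The detach calls at host/failover.sh@84, @98 and @100 above are what actually force the failovers this test counts. Condensed from the logged commands (address, ports, NQN and RPC sockets as used in this run; SPDK_DIR is a stand-in for the workspace path), the cycle is roughly the following sketch; the real script interleaves bdevperf runs and grep checks between the steps.

NQN=nqn.2016-06.io.spdk:cnode1
BPERF_RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock"   # assumes no spaces in SPDK_DIR

# Extra target listeners give the initiator alternate paths to the same subsystem
# (sent to the target's default RPC socket, as at @76 and @77 above).
"$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
"$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422

# Attach the same controller over every port so bdev_nvme has paths to fail over to.
for port in 4420 4421 4422; do
        $BPERF_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN"
done

# Drop the active path; once failover settles, the NVMe0 controller must still
# be reported.  The script repeats this for ports 4422 and 4421.
$BPERF_RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
sleep 3
$BPERF_RPC bdev_nvme_get_controllers | grep -q NVMe0

The pass/fail signal is then the count of 'Resetting controller successful' lines in the bdevperf log, compared against the expected number of failovers (host/failover.sh@65-67 earlier).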
11:41:24 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:49.930 11:41:24 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:49.930 11:41:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:49.930 11:41:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:49.930 11:41:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:49.930 11:41:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:49.930 11:41:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:49.930 11:41:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:49.930 rmmod nvme_tcp 00:25:49.930 rmmod nvme_fabrics 00:25:49.930 rmmod nvme_keyring 00:25:49.930 11:41:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:49.930 11:41:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:49.930 11:41:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:49.930 11:41:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2901226 ']' 00:25:49.930 11:41:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2901226 00:25:49.930 11:41:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2901226 ']' 00:25:49.930 11:41:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2901226 00:25:49.930 11:41:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:49.930 11:41:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:49.930 11:41:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2901226 00:25:49.930 11:41:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:49.930 11:41:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:49.930 11:41:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2901226' 00:25:49.930 killing process with pid 2901226 00:25:49.930 11:41:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2901226 00:25:49.930 11:41:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2901226 00:25:50.189 11:41:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:50.189 11:41:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:50.189 11:41:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:50.189 11:41:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:50.189 11:41:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:50.189 11:41:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.189 11:41:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:50.189 11:41:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.755 11:41:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:52.755 00:25:52.755 real 0m39.439s 00:25:52.755 user 2m7.547s 00:25:52.755 sys 0m7.783s 00:25:52.755 11:41:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:52.755 11:41:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
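For reference, the cleanup that just ran (host/failover.sh@111 onward plus nvmftestfini) reduces to a few steps; in this sketch the pid (2901226) and interface name (cvl_0_1) are specific to this run, and the first modprobe removal also drags out nvme_fabrics and nvme_keyring, as the rmmod lines above show.

# Sketch of the teardown sequence logged above.
"$SPDK_DIR"/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

modprobe -v -r nvme-tcp        # dependent nvme_fabrics / nvme_keyring come out with it
modprobe -v -r nvme-fabrics    # usually a no-op by this point

kill 2901226 && wait 2901226   # the nvmf target app launched earlier in the job
ip -4 addr flush cvl_0_1       # drop the test addresses from the test interface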
00:25:52.755 ************************************ 00:25:52.755 END TEST nvmf_failover 00:25:52.755 ************************************ 00:25:52.755 11:41:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:52.755 11:41:26 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:52.755 11:41:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:52.755 11:41:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:52.755 11:41:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:52.755 ************************************ 00:25:52.755 START TEST nvmf_host_discovery 00:25:52.755 ************************************ 00:25:52.755 11:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:52.755 * Looking for test storage... 00:25:52.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:52.755 11:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:52.755 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:52.755 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:52.755 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:52.755 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:52.755 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:52.755 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:52.755 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:52.755 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:52.755 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:52.755 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:52.755 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:52.755 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:52.755 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:52.755 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:52.755 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:52.755 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:52.755 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:52.755 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:52.756 11:41:26 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:52.756 11:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:58.027 11:41:32 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:58.027 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:58.027 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:58.027 11:41:32 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:58.027 Found net devices under 0000:af:00.0: cvl_0_0 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.027 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:58.028 Found net devices under 0000:af:00.1: cvl_0_1 00:25:58.028 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.028 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:58.028 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:58.028 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:58.028 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:58.028 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:58.028 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:58.028 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:58.028 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:58.028 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:58.028 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:58.028 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:58.028 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:58.028 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:58.028 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:58.028 11:41:32 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:58.028 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:58.028 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:58.028 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:58.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:58.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:25:58.286 00:25:58.286 --- 10.0.0.2 ping statistics --- 00:25:58.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.286 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:58.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:58.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:25:58.286 00:25:58.286 --- 10.0.0.1 ping statistics --- 00:25:58.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.286 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2910235 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
2910235 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2910235 ']' 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:58.286 11:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.286 [2024-07-15 11:41:32.737756] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:25:58.286 [2024-07-15 11:41:32.737815] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:58.544 EAL: No free 2048 kB hugepages reported on node 1 00:25:58.544 [2024-07-15 11:41:32.822777] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.544 [2024-07-15 11:41:32.925535] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:58.544 [2024-07-15 11:41:32.925583] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:58.544 [2024-07-15 11:41:32.925596] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:58.544 [2024-07-15 11:41:32.925607] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:58.544 [2024-07-15 11:41:32.925617] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
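At this point the target-side nvmf_tgt (pid 2910235 above) has been launched inside the cvl_0_0_ns_spdk namespace and the harness is waiting for its default RPC socket before configuring it. A minimal sketch of that bring-up and of the first configuration steps issued just below (a TCP transport plus the discovery listener on port 8009), assuming an SPDK checkout with scripts/rpc.py on hand; the polling loop is only a stand-in for the harness's waitforlisten helper:

    # start the target in the namespace prepared earlier in this log
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

    # wait for the default RPC socket (/var/tmp/spdk.sock) to answer
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

    # same first RPCs as discovery.sh: a TCP transport and the discovery listener
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
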
00:25:58.544 [2024-07-15 11:41:32.925644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.479 11:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:59.479 11:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:59.479 11:41:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:59.479 11:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:59.479 11:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.738 11:41:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:59.738 11:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:59.738 11:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.738 11:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.738 [2024-07-15 11:41:33.982240] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:59.738 11:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.738 11:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:59.738 11:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.738 11:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.738 [2024-07-15 11:41:33.994439] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:59.738 11:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.738 11:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:59.738 11:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.738 11:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.738 null0 00:25:59.738 11:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.738 11:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:59.738 11:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.738 11:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.738 null1 00:25:59.738 11:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.738 11:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:59.738 11:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.738 11:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.738 11:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.738 11:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2910519 00:25:59.738 11:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2910519 /tmp/host.sock 00:25:59.738 11:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:59.738 11:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2910519 ']' 00:25:59.738 11:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:25:59.738 11:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:59.738 11:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:59.738 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:59.738 11:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:59.738 11:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.738 [2024-07-15 11:41:34.109492] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:25:59.738 [2024-07-15 11:41:34.109598] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2910519 ] 00:25:59.738 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.998 [2024-07-15 11:41:34.223770] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.998 [2024-07-15 11:41:34.312118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.935 11:41:35 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:00.935 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:01.193 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:01.193 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.193 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.193 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:01.193 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:01.193 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.193 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:01.193 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:01.193 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.193 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:01.193 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.193 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 null0 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.194 [2024-07-15 11:41:35.622995] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:01.194 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.452 11:41:35 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:26:01.452 11:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:02.017 [2024-07-15 11:41:36.325440] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:02.017 [2024-07-15 11:41:36.325464] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:02.017 [2024-07-15 11:41:36.325483] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:02.017 [2024-07-15 11:41:36.412779] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:02.274 [2024-07-15 11:41:36.518670] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:26:02.274 [2024-07-15 11:41:36.518694] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:02.531 11:41:36 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:02.531 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:02.789 11:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:02.789 11:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:02.789 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.789 11:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.789 [2024-07-15 11:41:37.159550] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:02.789 [2024-07-15 11:41:37.159898] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:02.789 [2024-07-15 11:41:37.159926] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:02.789 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:02.789 [2024-07-15 11:41:37.245694] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:03.047 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.047 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:03.047 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:03.047 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:03.047 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:03.047 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:03.047 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:03.047 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:03.048 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:03.048 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:03.048 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.048 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.048 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:03.048 11:41:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:03.048 11:41:37 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:26:03.048 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.048 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:03.048 11:41:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:03.048 [2024-07-15 11:41:37.352385] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:03.048 [2024-07-15 11:41:37.352414] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:03.048 [2024-07-15 11:41:37.352422] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.980 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.980 [2024-07-15 11:41:38.444090] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:03.980 [2024-07-15 11:41:38.444125] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:04.238 [2024-07-15 11:41:38.450472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:04.238 [2024-07-15 11:41:38.450501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.238 [2024-07-15 11:41:38.450514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:04.238 [2024-07-15 11:41:38.450524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.238 [2024-07-15 11:41:38.450535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:04.238 [2024-07-15 11:41:38.450545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.238 [2024-07-15 11:41:38.450555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:04.238 [2024-07-15 11:41:38.450570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.238 [2024-07-15 11:41:38.450579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278470 is same with the state(5) to 
be set 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:04.238 [2024-07-15 11:41:38.460490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1278470 (9): Bad file descriptor 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.238 [2024-07-15 11:41:38.470530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:04.238 [2024-07-15 11:41:38.470769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.238 [2024-07-15 11:41:38.470787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1278470 with addr=10.0.0.2, port=4420 00:26:04.238 [2024-07-15 11:41:38.470798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278470 is same with the state(5) to be set 00:26:04.238 [2024-07-15 11:41:38.470814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1278470 (9): Bad file descriptor 00:26:04.238 [2024-07-15 11:41:38.470828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:04.238 [2024-07-15 11:41:38.470836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:04.238 [2024-07-15 11:41:38.470847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:04.238 [2024-07-15 11:41:38.470861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.238 [2024-07-15 11:41:38.480596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:04.238 [2024-07-15 11:41:38.480888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.238 [2024-07-15 11:41:38.480907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1278470 with addr=10.0.0.2, port=4420 00:26:04.238 [2024-07-15 11:41:38.480918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278470 is same with the state(5) to be set 00:26:04.238 [2024-07-15 11:41:38.480933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1278470 (9): Bad file descriptor 00:26:04.238 [2024-07-15 11:41:38.480947] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:04.238 [2024-07-15 11:41:38.480956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:04.238 [2024-07-15 11:41:38.480965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
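[NOTE] The autotest_common.sh@912-@916/@918 xtrace lines above are the generic retry helper the discovery test leans on for every waitforcondition check. A minimal sketch of what those lines correspond to follows; the name, the max=10 counter, the eval of the condition string, and the sleep 1 between attempts are taken from the trace, while the failure return at the end is an assumption.

# Sketch reconstructed from the xtrace (test/common/autotest_common.sh@912-@918).
# Retries an arbitrary bash condition up to 10 times, one second apart.
waitforcondition() {
    local cond=$1               # @912: e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
    local max=10                # @913
    while ((max--)); do         # @914
        if eval "$cond"; then   # @915
            return 0            # @916
        fi
        sleep 1                 # @918
    done
    return 1                    # assumed: give up after 10 attempts
}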
00:26:04.238 [2024-07-15 11:41:38.480978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.238 [2024-07-15 11:41:38.490660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:04.238 [2024-07-15 11:41:38.490874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.238 [2024-07-15 11:41:38.490891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1278470 with addr=10.0.0.2, port=4420 00:26:04.238 [2024-07-15 11:41:38.490902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278470 is same with the state(5) to be set 00:26:04.238 [2024-07-15 11:41:38.490920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1278470 (9): Bad file descriptor 00:26:04.238 [2024-07-15 11:41:38.490934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:04.238 [2024-07-15 11:41:38.490942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:04.238 [2024-07-15 11:41:38.490951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:04.238 [2024-07-15 11:41:38.490965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.238 [2024-07-15 11:41:38.500724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:04.238 [2024-07-15 11:41:38.500988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.238 [2024-07-15 11:41:38.501005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1278470 with addr=10.0.0.2, port=4420 00:26:04.238 [2024-07-15 11:41:38.501014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278470 is same with the state(5) to be set 00:26:04.238 [2024-07-15 11:41:38.501029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1278470 (9): Bad file descriptor 00:26:04.238 [2024-07-15 11:41:38.501042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:04.238 [2024-07-15 11:41:38.501051] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:04.238 [2024-07-15 11:41:38.501060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:04.238 [2024-07-15 11:41:38.501072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
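[NOTE] The block of connect()/reset errors repeating above and below (roughly every 10 ms, from 11:41:38.470 through 11:41:38.571) is expected at this point in the test: the 10.0.0.2:4420 listener was just removed by the nvmf_subsystem_remove_listener call at host/discovery.sh@127, so each reconnect attempt to that port is refused until discovery re-attaches the controller on port 4421. The errno = 111 in the posix_sock_create messages is ECONNREFUSED; an illustrative way to confirm that mapping on a typical Linux build host (not part of this log):

grep -w ECONNREFUSED /usr/include/asm-generic/errno.h
# expected output: #define ECONNREFUSED 111 /* Connection refused */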
00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:04.238 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.238 [2024-07-15 11:41:38.510784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:04.238 [2024-07-15 11:41:38.511049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.238 [2024-07-15 11:41:38.511066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1278470 with addr=10.0.0.2, port=4420 00:26:04.238 [2024-07-15 11:41:38.511076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278470 is same with the state(5) to be set 00:26:04.238 [2024-07-15 11:41:38.511091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1278470 (9): Bad file descriptor 00:26:04.239 [2024-07-15 11:41:38.511109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:04.239 [2024-07-15 11:41:38.511118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:04.239 [2024-07-15 11:41:38.511126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:04.239 [2024-07-15 11:41:38.511140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
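[NOTE] The two lookup helpers being polled above, host/discovery.sh@59 (get_subsystem_names) and host/discovery.sh@55 (get_bdev_list), are thin rpc_cmd + jq pipelines against the host-side SPDK app listening on /tmp/host.sock. Sketches reconstructed from the xtrace; the function bodies are assumed to contain nothing beyond the traced pipeline:

# host/discovery.sh@59: names of the NVMe controllers the host app has attached
get_subsystem_names() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

# host/discovery.sh@55: bdevs exposed by those controllers; reads "nvme0n1 nvme0n2"
# once both namespaces of cnode0 are visible
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}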
00:26:04.239 [2024-07-15 11:41:38.520845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:04.239 [2024-07-15 11:41:38.521141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.239 [2024-07-15 11:41:38.521159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1278470 with addr=10.0.0.2, port=4420 00:26:04.239 [2024-07-15 11:41:38.521169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278470 is same with the state(5) to be set 00:26:04.239 [2024-07-15 11:41:38.521184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1278470 (9): Bad file descriptor 00:26:04.239 [2024-07-15 11:41:38.521197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:04.239 [2024-07-15 11:41:38.521205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:04.239 [2024-07-15 11:41:38.521214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:04.239 [2024-07-15 11:41:38.521227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.239 [2024-07-15 11:41:38.530909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:04.239 [2024-07-15 11:41:38.531180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.239 [2024-07-15 11:41:38.531197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1278470 with addr=10.0.0.2, port=4420 00:26:04.239 [2024-07-15 11:41:38.531206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278470 is same with the state(5) to be set 00:26:04.239 [2024-07-15 11:41:38.531221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1278470 (9): Bad file descriptor 00:26:04.239 [2024-07-15 11:41:38.531234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:04.239 [2024-07-15 11:41:38.531243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:04.239 [2024-07-15 11:41:38.531252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:04.239 [2024-07-15 11:41:38.531273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.239 [2024-07-15 11:41:38.540971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:04.239 [2024-07-15 11:41:38.541265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.239 [2024-07-15 11:41:38.541283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1278470 with addr=10.0.0.2, port=4420 00:26:04.239 [2024-07-15 11:41:38.541292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278470 is same with the state(5) to be set 00:26:04.239 [2024-07-15 11:41:38.541307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1278470 (9): Bad file descriptor 00:26:04.239 [2024-07-15 11:41:38.541320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:04.239 [2024-07-15 11:41:38.541328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:04.239 [2024-07-15 11:41:38.541337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:04.239 [2024-07-15 11:41:38.541351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.239 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.239 [2024-07-15 11:41:38.551029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:04.239 [2024-07-15 11:41:38.551267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.239 [2024-07-15 11:41:38.551284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1278470 with addr=10.0.0.2, port=4420 00:26:04.239 [2024-07-15 11:41:38.551294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278470 is same with the state(5) to be set 00:26:04.239 [2024-07-15 11:41:38.551308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1278470 (9): Bad file descriptor 00:26:04.239 [2024-07-15 11:41:38.551321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:04.239 [2024-07-15 11:41:38.551330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:04.239 [2024-07-15 11:41:38.551339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:04.239 [2024-07-15 11:41:38.551352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.239 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:04.239 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:04.239 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:04.239 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:04.239 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:04.239 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:04.239 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:04.239 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:04.239 [2024-07-15 11:41:38.561089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:04.239 [2024-07-15 11:41:38.561368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.239 [2024-07-15 11:41:38.561385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1278470 with addr=10.0.0.2, port=4420 00:26:04.239 [2024-07-15 11:41:38.561394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278470 is same with the state(5) to be set 00:26:04.239 [2024-07-15 11:41:38.561409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1278470 (9): Bad file descriptor 00:26:04.239 [2024-07-15 11:41:38.561422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:04.239 [2024-07-15 11:41:38.561431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:04.239 [2024-07-15 11:41:38.561440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:04.239 [2024-07-15 11:41:38.561453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
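[NOTE] The notification bookkeeping exercised at host/discovery.sh@74-@80 (near the top of this excerpt with notification_count=0 and notify_id=2, and again after the path flips to 4421 with notification_count=2 and notify_id=4) counts bdev add/remove events reported by the host app since the last checkpoint. A sketch reconstructed from the xtrace; the notify_id increment is inferred from the traced assignments and should be read as an assumption:

# host/discovery.sh@74-@75: count notifications newer than the last seen id
get_notification_count() {
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))   # assumed bookkeeping
}

# host/discovery.sh@79-@80: wait until the count matches what the test step expects
is_notification_count_eq() {
    local expected_count=$1
    waitforcondition 'get_notification_count && ((notification_count == expected_count))'
}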
00:26:04.239 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:04.239 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:04.239 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.239 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:04.239 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.239 11:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:04.239 [2024-07-15 11:41:38.571149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:04.239 [2024-07-15 11:41:38.571425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.239 [2024-07-15 11:41:38.571442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1278470 with addr=10.0.0.2, port=4420 00:26:04.239 [2024-07-15 11:41:38.571452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278470 is same with the state(5) to be set 00:26:04.239 [2024-07-15 11:41:38.571467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1278470 (9): Bad file descriptor 00:26:04.239 [2024-07-15 11:41:38.571480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:04.239 [2024-07-15 11:41:38.571489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:04.239 [2024-07-15 11:41:38.571498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:04.239 [2024-07-15 11:41:38.571511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
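[NOTE] host/discovery.sh@63 (get_subsystem_paths), polled by the @131 waitforcondition above, reduces a controller's path list to its trsvcid values. The first poll below returns "4420 4421" while both paths are still attached, and the check passes once only "4421" ($NVMF_SECOND_PORT) remains. Sketch reconstructed from the traced pipeline:

# host/discovery.sh@63: service IDs (TCP ports) of every path attached to a controller
get_subsystem_paths() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}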
00:26:04.239 [2024-07-15 11:41:38.572796] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:04.239 [2024-07-15 11:41:38.572817] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:04.239 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.239 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:26:04.239 11:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:05.170 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:05.170 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:05.170 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:05.170 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:05.170 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:05.170 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.170 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:05.170 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.170 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:05.170 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.427 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:05.428 11:41:39 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.428 11:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.797 [2024-07-15 11:41:40.948984] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:06.797 [2024-07-15 11:41:40.949009] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:06.798 [2024-07-15 11:41:40.949025] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:06.798 [2024-07-15 11:41:41.037327] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:07.055 [2024-07-15 11:41:41.306967] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:07.055 [2024-07-15 11:41:41.307006] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:07.055 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.055 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:07.055 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:07.055 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:07.055 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:07.055 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:07.055 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:07.055 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:07.055 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:07.055 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.055 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:07.055 request: 00:26:07.055 { 00:26:07.055 "name": "nvme", 00:26:07.055 "trtype": "tcp", 00:26:07.055 "traddr": "10.0.0.2", 00:26:07.055 "adrfam": "ipv4", 00:26:07.055 "trsvcid": 
"8009", 00:26:07.055 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:07.055 "wait_for_attach": true, 00:26:07.055 "method": "bdev_nvme_start_discovery", 00:26:07.055 "req_id": 1 00:26:07.055 } 00:26:07.055 Got JSON-RPC error response 00:26:07.055 response: 00:26:07.055 { 00:26:07.055 "code": -17, 00:26:07.055 "message": "File exists" 00:26:07.055 } 00:26:07.055 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:07.055 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:07.055 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:07.055 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:07.055 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:07.055 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:07.055 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:07.055 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:07.055 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.055 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:07.055 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:07.055 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:07.055 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.055 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # 
type -t rpc_cmd 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:07.056 request: 00:26:07.056 { 00:26:07.056 "name": "nvme_second", 00:26:07.056 "trtype": "tcp", 00:26:07.056 "traddr": "10.0.0.2", 00:26:07.056 "adrfam": "ipv4", 00:26:07.056 "trsvcid": "8009", 00:26:07.056 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:07.056 "wait_for_attach": true, 00:26:07.056 "method": "bdev_nvme_start_discovery", 00:26:07.056 "req_id": 1 00:26:07.056 } 00:26:07.056 Got JSON-RPC error response 00:26:07.056 response: 00:26:07.056 { 00:26:07.056 "code": -17, 00:26:07.056 "message": "File exists" 00:26:07.056 } 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:07.056 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:07.313 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.313 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:07.313 11:41:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:07.313 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:07.313 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:07.313 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:07.313 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:07.313 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:07.313 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:07.313 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:07.313 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.313 11:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.246 [2024-07-15 11:41:42.570616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.246 [2024-07-15 11:41:42.570650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1291ce0 with addr=10.0.0.2, port=8010 00:26:08.246 [2024-07-15 11:41:42.570667] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:08.246 [2024-07-15 11:41:42.570676] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:08.246 [2024-07-15 11:41:42.570684] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:09.178 [2024-07-15 11:41:43.573073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.178 [2024-07-15 11:41:43.573105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1291ce0 with addr=10.0.0.2, port=8010 00:26:09.178 [2024-07-15 11:41:43.573121] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:09.178 [2024-07-15 11:41:43.573130] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:09.178 [2024-07-15 11:41:43.573138] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:10.551 [2024-07-15 11:41:44.575199] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:10.551 request: 00:26:10.551 { 00:26:10.551 "name": "nvme_second", 00:26:10.551 "trtype": "tcp", 00:26:10.551 "traddr": "10.0.0.2", 00:26:10.551 "adrfam": "ipv4", 00:26:10.551 "trsvcid": "8010", 00:26:10.551 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:10.551 "wait_for_attach": false, 00:26:10.551 "attach_timeout_ms": 3000, 00:26:10.551 "method": "bdev_nvme_start_discovery", 00:26:10.551 "req_id": 1 00:26:10.551 } 00:26:10.551 Got JSON-RPC error response 00:26:10.551 response: 00:26:10.551 { 00:26:10.551 "code": -110, 00:26:10.551 "message": "Connection timed out" 00:26:10.551 } 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@651 -- # es=1 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2910519 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:10.551 rmmod nvme_tcp 00:26:10.551 rmmod nvme_fabrics 00:26:10.551 rmmod nvme_keyring 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2910235 ']' 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2910235 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 2910235 ']' 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 2910235 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2910235 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 2910235' 00:26:10.551 killing process with pid 2910235 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 2910235 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 2910235 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:10.551 11:41:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.085 11:41:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:13.086 00:26:13.086 real 0m20.298s 00:26:13.086 user 0m27.035s 00:26:13.086 sys 0m5.925s 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.086 ************************************ 00:26:13.086 END TEST nvmf_host_discovery 00:26:13.086 ************************************ 00:26:13.086 11:41:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:13.086 11:41:47 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:13.086 11:41:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:13.086 11:41:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:13.086 11:41:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:13.086 ************************************ 00:26:13.086 START TEST nvmf_host_multipath_status 00:26:13.086 ************************************ 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:13.086 * Looking for test storage... 
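[NOTE] The START TEST / END TEST banners and the real/user/sys timing lines around each sub-test come from the run_test wrapper invoked at nvmf/nvmf.sh@102 above. A simplified sketch of that wrapper, reconstructed from the visible banners and the autotest_common.sh@1099-@1142 trace; the argument check and the exact banner/xtrace handling are assumptions:

# Simplified sketch of run_test (test/common/autotest_common.sh): it times the
# sub-test command and prints the banners seen throughout this log.
run_test() {
    local test_name=$1; shift
    (($# >= 1)) || return 1   # @1099: a command must follow the test name
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                 # here: .../test/nvmf/host/multipath_status.sh --transport=tcp
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc                # @1142 shows the discovery sub-test returning 0 here
}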
00:26:13.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:13.086 11:41:47 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:26:13.086 11:41:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:18.358 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:18.358 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
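For reference, the device discovery traced above by nvmf/common.sh amounts to scanning sysfs for the E810 PCI IDs and collecting the kernel net interfaces under each matching function. A minimal standalone sketch of that step, not the SPDK helper itself, matching only the 0x8086:0x159b ID pair seen in this run:

  # Find Intel E810 (vendor 0x8086, device 0x159b) PCI functions and list the
  # net interfaces bound to them, mirroring gather_supported_nvmf_pci_devs above.
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor")   # e.g. 0x8086
      device=$(cat "$pci/device")   # e.g. 0x159b
      [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
      echo "Found ${pci##*/} ($vendor - $device)"
      # Net interfaces exposed by this function, e.g. cvl_0_0 / cvl_0_1
      for net in "$pci"/net/*; do
          [[ -e $net ]] && echo "  net device: ${net##*/}"
      done
  done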
00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:18.358 Found net devices under 0000:af:00.0: cvl_0_0 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:18.358 Found net devices under 0000:af:00.1: cvl_0_1 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.358 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:18.359 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:26:18.359 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:18.359 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:18.359 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:18.359 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:18.359 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:18.359 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:18.359 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:18.359 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:18.359 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:18.359 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:18.359 11:41:52 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:18.359 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:18.359 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:18.359 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:18.359 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:18.359 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:18.618 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:18.618 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:18.618 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:18.618 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:18.618 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:18.618 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:18.618 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:18.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:18.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:26:18.618 00:26:18.618 --- 10.0.0.2 ping statistics --- 00:26:18.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.618 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:26:18.618 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:18.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:18.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:26:18.618 00:26:18.618 --- 10.0.0.1 ping statistics --- 00:26:18.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.618 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:26:18.618 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:18.618 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:26:18.618 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:18.618 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:18.618 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:18.618 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:18.618 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:18.618 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:18.618 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:18.618 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:18.618 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:18.618 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:18.618 11:41:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:18.618 11:41:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2916172 00:26:18.618 11:41:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2916172 00:26:18.618 11:41:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:18.618 11:41:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2916172 ']' 00:26:18.618 11:41:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.618 11:41:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:18.618 11:41:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.618 11:41:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:18.618 11:41:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:18.618 [2024-07-15 11:41:53.054422] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:26:18.618 [2024-07-15 11:41:53.054479] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:18.877 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.877 [2024-07-15 11:41:53.139307] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:18.877 [2024-07-15 11:41:53.230121] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:18.877 [2024-07-15 11:41:53.230165] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:18.877 [2024-07-15 11:41:53.230176] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:18.877 [2024-07-15 11:41:53.230185] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:18.877 [2024-07-15 11:41:53.230192] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:18.877 [2024-07-15 11:41:53.230245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.877 [2024-07-15 11:41:53.230248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.877 11:41:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:18.877 11:41:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:18.877 11:41:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:18.877 11:41:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:18.877 11:41:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:19.135 11:41:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:19.135 11:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2916172 00:26:19.135 11:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:19.135 [2024-07-15 11:41:53.594292] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:19.393 11:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:19.651 Malloc0 00:26:19.651 11:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:19.908 11:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:20.166 11:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:20.424 [2024-07-15 11:41:54.658351] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:20.424 11:41:54 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:20.682 [2024-07-15 11:41:54.923172] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:20.682 11:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2916530 00:26:20.682 11:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:20.682 11:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:20.682 11:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2916530 /var/tmp/bdevperf.sock 00:26:20.682 11:41:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2916530 ']' 00:26:20.682 11:41:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:20.682 11:41:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:20.682 11:41:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:20.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:20.683 11:41:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:20.683 11:41:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:21.616 11:41:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:21.616 11:41:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:21.616 11:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:21.874 11:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:22.132 Nvme0n1 00:26:22.390 11:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:22.649 Nvme0n1 00:26:22.649 11:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:22.649 11:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:24.596 11:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:24.596 11:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:24.884 11:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:25.142 11:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:26.075 11:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:26.075 11:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:26.075 11:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.075 11:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:26.333 11:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.333 11:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:26.333 11:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.333 11:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:26.591 11:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:26.591 11:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:26.591 11:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.591 11:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:26.849 11:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.849 11:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:26.849 11:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.849 11:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:27.106 11:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.106 11:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:27.106 11:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.106 11:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:27.363 11:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.363 11:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:27.363 11:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.363 11:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:27.621 11:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.621 11:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:27.621 11:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:27.879 11:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:27.879 11:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:29.250 11:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:29.250 11:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:29.250 11:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.250 11:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:29.250 11:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:29.250 11:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:29.250 11:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:29.250 11:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.507 11:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.507 11:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:29.507 11:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.507 11:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:29.764 11:42:04 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.764 11:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:29.764 11:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.764 11:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:30.021 11:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.021 11:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:30.021 11:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.021 11:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:30.279 11:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.279 11:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:30.279 11:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.279 11:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:30.537 11:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.537 11:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:30.537 11:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:30.795 11:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:31.053 11:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:31.987 11:42:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:31.987 11:42:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:31.987 11:42:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:31.987 11:42:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.245 11:42:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.245 11:42:06 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:32.245 11:42:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.245 11:42:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:32.503 11:42:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:32.503 11:42:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:32.503 11:42:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.503 11:42:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:32.761 11:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.761 11:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:33.019 11:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.019 11:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:33.277 11:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.277 11:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:33.277 11:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.277 11:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:33.277 11:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.277 11:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:33.277 11:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.277 11:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:33.535 11:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.535 11:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:33.535 11:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:33.794 11:42:08 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:34.053 11:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:35.426 11:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:35.426 11:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:35.426 11:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.426 11:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:35.426 11:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.426 11:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:35.427 11:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.427 11:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:35.684 11:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:35.684 11:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:35.684 11:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.684 11:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:35.942 11:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.942 11:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:35.942 11:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.942 11:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:35.942 11:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.942 11:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:35.942 11:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.942 11:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:36.200 11:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:26:36.200 11:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:36.200 11:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.200 11:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:36.458 11:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:36.458 11:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:36.458 11:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:36.458 11:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:36.716 11:42:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:38.085 11:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:38.085 11:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:38.085 11:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.085 11:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:38.085 11:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:38.085 11:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:38.085 11:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.085 11:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:38.342 11:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:38.342 11:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:38.342 11:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.342 11:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:38.599 11:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.599 11:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:26:38.599 11:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.599 11:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:38.855 11:42:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.855 11:42:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:38.855 11:42:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.855 11:42:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:39.111 11:42:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:39.111 11:42:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:39.111 11:42:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.111 11:42:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:39.111 11:42:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:39.111 11:42:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:39.111 11:42:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:39.674 11:42:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:39.674 11:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:41.047 11:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:41.047 11:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:41.047 11:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.047 11:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:41.047 11:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:41.047 11:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:41.047 11:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.047 11:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:41.305 11:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.305 11:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:41.305 11:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.305 11:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:41.562 11:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.562 11:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:41.562 11:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.562 11:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:41.848 11:42:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.848 11:42:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:41.848 11:42:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.848 11:42:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:42.106 11:42:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:42.106 11:42:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:42.106 11:42:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.106 11:42:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:42.364 11:42:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.364 11:42:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:42.622 11:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:42.622 11:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:26:42.880 11:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:43.137 11:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:44.071 11:42:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:44.071 11:42:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:44.071 11:42:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.071 11:42:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:44.329 11:42:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:44.329 11:42:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:44.329 11:42:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.329 11:42:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:44.586 11:42:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:44.586 11:42:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:44.586 11:42:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.586 11:42:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:44.843 11:42:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:44.843 11:42:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:44.843 11:42:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:44.843 11:42:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.408 11:42:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.408 11:42:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:45.408 11:42:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.408 11:42:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:45.408 11:42:19 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.408 11:42:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:45.408 11:42:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.408 11:42:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:45.666 11:42:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.666 11:42:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:45.666 11:42:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:45.924 11:42:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:46.182 11:42:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:47.557 11:42:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:47.557 11:42:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:47.557 11:42:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.557 11:42:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:47.557 11:42:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:47.557 11:42:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:47.557 11:42:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.557 11:42:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:47.814 11:42:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:47.814 11:42:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:47.814 11:42:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.814 11:42:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:48.072 11:42:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.072 11:42:22 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:48.072 11:42:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.072 11:42:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:48.072 11:42:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.073 11:42:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:48.073 11:42:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.073 11:42:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:48.330 11:42:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.330 11:42:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:48.330 11:42:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:48.330 11:42:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.588 11:42:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.588 11:42:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:48.588 11:42:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:48.856 11:42:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:49.115 11:42:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:50.490 11:42:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:50.490 11:42:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:50.490 11:42:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.490 11:42:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:50.490 11:42:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.490 11:42:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:50.490 11:42:24 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.490 11:42:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:50.747 11:42:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.747 11:42:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:50.747 11:42:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.747 11:42:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:51.004 11:42:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.004 11:42:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:51.004 11:42:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.004 11:42:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:51.260 11:42:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.260 11:42:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:51.260 11:42:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.260 11:42:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:51.517 11:42:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.517 11:42:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:51.517 11:42:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.517 11:42:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:51.775 11:42:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.775 11:42:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:51.775 11:42:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:52.032 11:42:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:52.290 11:42:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:53.221 11:42:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:53.221 11:42:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:53.221 11:42:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.222 11:42:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:53.478 11:42:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.478 11:42:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:53.478 11:42:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.478 11:42:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:53.735 11:42:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:53.735 11:42:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:53.735 11:42:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.735 11:42:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:54.004 11:42:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:54.005 11:42:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:54.005 11:42:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.005 11:42:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:54.289 11:42:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:54.289 11:42:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:54.289 11:42:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.289 11:42:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:54.593 11:42:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:54.593 11:42:28 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:54.593 11:42:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.593 11:42:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:54.853 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:54.853 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2916530 00:26:54.853 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2916530 ']' 00:26:54.853 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2916530 00:26:54.853 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:54.853 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:54.853 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2916530 00:26:54.853 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:54.853 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:54.853 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2916530' 00:26:54.853 killing process with pid 2916530 00:26:54.853 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2916530 00:26:54.853 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2916530 00:26:55.136 Connection closed with partial response: 00:26:55.136 00:26:55.136 00:26:55.136 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2916530 00:26:55.136 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:55.136 [2024-07-15 11:41:55.005631] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:26:55.136 [2024-07-15 11:41:55.005692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2916530 ] 00:26:55.136 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.136 [2024-07-15 11:41:55.118038] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.136 [2024-07-15 11:41:55.263021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:55.136 Running I/O for 90 seconds... 
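(Note on the trace above: multipath_status.sh repeatedly flips the ANA state of the two listeners on ports 4420/4421 with nvmf_subsystem_listener_set_ana_state, sleeps, and then polls bdev_nvme_get_io_paths through jq to confirm each path's current/connected/accessible flags. The following is a minimal sketch of that check pattern, condensed from the commands visible in the trace; the RPC socket, NQN, and addresses are taken from the log, but the helper bodies are illustrative rather than copied from the test script.)

    # assumption: same rpc.py path and bdevperf RPC socket as in the trace above
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # $1=trsvcid  $2=field (current|connected|accessible)  $3=expected value
    port_status() {
        local got
        got=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$got" == "$3" ]]
    }

    # $1=ANA state for port 4420, $2=ANA state for port 4421
    set_ANA_state() {
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    # mirrors the "@133/@135" step above: 4420 stays usable, 4421 becomes inaccessible
    set_ANA_state non_optimized inaccessible
    sleep 1
    port_status 4420 current true && port_status 4421 accessible false

(The "@135" check in the trace expects exactly this outcome: once 4421 is marked inaccessible, its path stops being current/accessible while 4420 remains the active path, which is also why the bdevperf log below reports ASYMMETRIC ACCESS INACCESSIBLE completions for I/O issued during the transition.)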
00:26:55.136 [2024-07-15 11:42:10.900326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.136 [2024-07-15 11:42:10.900403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.900461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.900488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.900529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.900553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.900595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.900617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.900658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.900680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.900722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.900744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.900785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.900806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.900846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.900867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.900907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.900928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.900968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.900989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.901029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.901063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.901103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.901125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.901166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.901188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.901230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.901253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.901304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.901327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.901366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.901389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.901429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.901451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.901492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.901514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.901554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.901577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.901617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.901640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.901682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.901704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.901745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.901767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.901807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:62776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.901829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.901874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.901896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.901936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:62792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.901958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.901998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.902020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.902059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.902081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.902121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.902143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.902183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.902204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.902245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 
[2024-07-15 11:42:10.902275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.902317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.136 [2024-07-15 11:42:10.902338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:55.136 [2024-07-15 11:42:10.902378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.137 [2024-07-15 11:42:10.902398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.902439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.137 [2024-07-15 11:42:10.902460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.902499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.137 [2024-07-15 11:42:10.902521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.902560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:62872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.137 [2024-07-15 11:42:10.902582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.902626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.137 [2024-07-15 11:42:10.902648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.902688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.137 [2024-07-15 11:42:10.902709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.902748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.137 [2024-07-15 11:42:10.902770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.902809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.137 [2024-07-15 11:42:10.902830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.902872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:62912 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.137 [2024-07-15 11:42:10.902892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.904868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.137 [2024-07-15 11:42:10.904910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.904956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.137 [2024-07-15 11:42:10.904980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.905020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.137 [2024-07-15 11:42:10.905043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.905085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.137 [2024-07-15 11:42:10.905107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.905147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.137 [2024-07-15 11:42:10.905169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.905210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.137 [2024-07-15 11:42:10.905233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.905284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.137 [2024-07-15 11:42:10.905308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.905348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.137 [2024-07-15 11:42:10.905378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.905418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.137 [2024-07-15 11:42:10.905440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.905481] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.137 [2024-07-15 11:42:10.905503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.905544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.137 [2024-07-15 11:42:10.905564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.905606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.137 [2024-07-15 11:42:10.905628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.905668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.137 [2024-07-15 11:42:10.905691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.905731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.137 [2024-07-15 11:42:10.905753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.905793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.137 [2024-07-15 11:42:10.905815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.905856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.137 [2024-07-15 11:42:10.905877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.905918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.137 [2024-07-15 11:42:10.905939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.905980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.137 [2024-07-15 11:42:10.906003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.906044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.137 [2024-07-15 11:42:10.906064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 
11:42:10.906106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.137 [2024-07-15 11:42:10.906133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.906175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.137 [2024-07-15 11:42:10.906198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.906237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.137 [2024-07-15 11:42:10.906271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.906313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.137 [2024-07-15 11:42:10.906335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.906375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.137 [2024-07-15 11:42:10.906397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.906437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:62416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.137 [2024-07-15 11:42:10.906460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.906500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.137 [2024-07-15 11:42:10.906523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.906564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.137 [2024-07-15 11:42:10.906586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.906626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.137 [2024-07-15 11:42:10.906648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.906688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.137 [2024-07-15 11:42:10.906710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:105 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.906749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.137 [2024-07-15 11:42:10.906771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.906811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.137 [2024-07-15 11:42:10.906833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.906872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.137 [2024-07-15 11:42:10.906894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:55.137 [2024-07-15 11:42:10.906937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.137 [2024-07-15 11:42:10.906959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.906998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.138 [2024-07-15 11:42:10.907021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.907062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.138 [2024-07-15 11:42:10.907083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.907123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.138 [2024-07-15 11:42:10.907145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.907184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.138 [2024-07-15 11:42:10.907206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.907245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.138 [2024-07-15 11:42:10.907276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.907317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.138 [2024-07-15 11:42:10.907339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.907379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.138 [2024-07-15 11:42:10.907400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.907440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.138 [2024-07-15 11:42:10.907463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.907503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.138 [2024-07-15 11:42:10.907526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.907567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.138 [2024-07-15 11:42:10.907588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.907628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.138 [2024-07-15 11:42:10.907650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.907695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.138 [2024-07-15 11:42:10.907717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.907757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:62584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.138 [2024-07-15 11:42:10.907778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.907818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.138 [2024-07-15 11:42:10.907840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.907879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.138 [2024-07-15 11:42:10.907901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.907941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:55.138 [2024-07-15 11:42:10.907962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.908002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.138 [2024-07-15 11:42:10.908023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.908063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.138 [2024-07-15 11:42:10.908084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.908124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.138 [2024-07-15 11:42:10.908145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.908185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.138 [2024-07-15 11:42:10.908206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.908262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.138 [2024-07-15 11:42:10.908287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.908327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.138 [2024-07-15 11:42:10.908348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.908388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.138 [2024-07-15 11:42:10.908410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.908454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.138 [2024-07-15 11:42:10.908475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.908516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.138 [2024-07-15 11:42:10.908536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.908576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 
nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.138 [2024-07-15 11:42:10.908598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.908643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.138 [2024-07-15 11:42:10.908666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.908704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.138 [2024-07-15 11:42:10.908726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.908767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.138 [2024-07-15 11:42:10.908788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.908828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.138 [2024-07-15 11:42:10.908849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.908888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.138 [2024-07-15 11:42:10.908910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.908949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.138 [2024-07-15 11:42:10.908971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.909011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.138 [2024-07-15 11:42:10.909032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.909072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.138 [2024-07-15 11:42:10.909094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.909134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.138 [2024-07-15 11:42:10.909156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.909195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.138 [2024-07-15 11:42:10.909221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.909273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.138 [2024-07-15 11:42:10.909296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.909338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.138 [2024-07-15 11:42:10.909359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.911183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.138 [2024-07-15 11:42:10.911222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.911293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.138 [2024-07-15 11:42:10.911318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.911358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.138 [2024-07-15 11:42:10.911380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:55.138 [2024-07-15 11:42:10.911420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.138 [2024-07-15 11:42:10.911443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.911485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.911507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.911547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.911570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.911611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.911633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:26:55.139 [2024-07-15 11:42:10.911672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.911694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.911734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.911755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.911795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.911824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.911865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.911886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.911926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.911948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.911988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.912010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.912049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.912072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.912112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.912135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.912175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.912197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.912237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.912271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:9 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.912313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.139 [2024-07-15 11:42:10.912335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.912376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.912398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.912437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.912460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.912500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.912523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.912563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.912584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.912635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.912658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.912698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.912723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.912764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.912786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.912827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.912849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.912890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.912911] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.912951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.912974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.913015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.913037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.913077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.913098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.913139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.913161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.913202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.913224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.913273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.913296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.913336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.913358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.913404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.913428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.913468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.913490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.913530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:55.139 [2024-07-15 11:42:10.913553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.913593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:62760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.913615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.913656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.913678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.913719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:62776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.913741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.913781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.913803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.913843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:62792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.913864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.913904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.913927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.913967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.913990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.914029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.139 [2024-07-15 11:42:10.914051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:55.139 [2024-07-15 11:42:10.914091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.140 [2024-07-15 11:42:10.914113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.914154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 
lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.140 [2024-07-15 11:42:10.914180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.914220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:62840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.140 [2024-07-15 11:42:10.914242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.914292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.140 [2024-07-15 11:42:10.914315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.914355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.140 [2024-07-15 11:42:10.914377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.914418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:62864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.140 [2024-07-15 11:42:10.914440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.914481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.140 [2024-07-15 11:42:10.914503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.914542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.140 [2024-07-15 11:42:10.914564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.914605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.140 [2024-07-15 11:42:10.914627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.914667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.140 [2024-07-15 11:42:10.914690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.914731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.140 [2024-07-15 11:42:10.914754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.916322] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.140 [2024-07-15 11:42:10.916359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.916406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.140 [2024-07-15 11:42:10.916428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.916469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.140 [2024-07-15 11:42:10.916497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.916538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.140 [2024-07-15 11:42:10.916560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.916600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.140 [2024-07-15 11:42:10.916622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.916664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.140 [2024-07-15 11:42:10.916686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.916726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.140 [2024-07-15 11:42:10.916748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.916788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.140 [2024-07-15 11:42:10.916811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.916852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.140 [2024-07-15 11:42:10.916874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.916915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.140 [2024-07-15 11:42:10.916939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:26:55.140 [2024-07-15 11:42:10.916980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.140 [2024-07-15 11:42:10.917002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.917043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.140 [2024-07-15 11:42:10.917065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.917106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.140 [2024-07-15 11:42:10.917127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.917168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.140 [2024-07-15 11:42:10.917190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.917231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.140 [2024-07-15 11:42:10.917253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.917310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.140 [2024-07-15 11:42:10.917332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.917373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.140 [2024-07-15 11:42:10.917395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.917436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.140 [2024-07-15 11:42:10.917458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.917498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.140 [2024-07-15 11:42:10.917521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.917561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.140 [2024-07-15 11:42:10.917583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.917624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.140 [2024-07-15 11:42:10.917648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.917689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.140 [2024-07-15 11:42:10.917711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.917750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.140 [2024-07-15 11:42:10.917772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:55.140 [2024-07-15 11:42:10.917813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.141 [2024-07-15 11:42:10.917835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.917875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.141 [2024-07-15 11:42:10.917897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.917937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:62416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.141 [2024-07-15 11:42:10.917958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.917998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.141 [2024-07-15 11:42:10.918020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.918064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.141 [2024-07-15 11:42:10.918086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.918128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.141 [2024-07-15 11:42:10.918149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.918191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.141 [2024-07-15 11:42:10.918212] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.918252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.141 [2024-07-15 11:42:10.918285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.918326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.141 [2024-07-15 11:42:10.918348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.918389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.141 [2024-07-15 11:42:10.918410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.918450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.141 [2024-07-15 11:42:10.918473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.918512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.141 [2024-07-15 11:42:10.918534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.918574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.141 [2024-07-15 11:42:10.918596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.918637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.141 [2024-07-15 11:42:10.918659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.918700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.141 [2024-07-15 11:42:10.918721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.918762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.141 [2024-07-15 11:42:10.918783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.918824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:55.141 [2024-07-15 11:42:10.918851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.918893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.141 [2024-07-15 11:42:10.918914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.918955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.141 [2024-07-15 11:42:10.918977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.919017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.141 [2024-07-15 11:42:10.919039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.919079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.141 [2024-07-15 11:42:10.919101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.919142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.141 [2024-07-15 11:42:10.919164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.919204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.141 [2024-07-15 11:42:10.919226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.919276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:62584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.141 [2024-07-15 11:42:10.919300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.919340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.141 [2024-07-15 11:42:10.919363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.919403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.141 [2024-07-15 11:42:10.919424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.919465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:50 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.141 [2024-07-15 11:42:10.919487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.919527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.141 [2024-07-15 11:42:10.919550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.919590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.141 [2024-07-15 11:42:10.919616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.919656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.141 [2024-07-15 11:42:10.919679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.919720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.141 [2024-07-15 11:42:10.919742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.919781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.141 [2024-07-15 11:42:10.919803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.919844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.141 [2024-07-15 11:42:10.919867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.919907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.141 [2024-07-15 11:42:10.919929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.919969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.141 [2024-07-15 11:42:10.919991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.920031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.141 [2024-07-15 11:42:10.920053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.920093] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.141 [2024-07-15 11:42:10.920116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.920155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.141 [2024-07-15 11:42:10.920177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.920216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.141 [2024-07-15 11:42:10.920239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.920288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.141 [2024-07-15 11:42:10.920312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.920354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.141 [2024-07-15 11:42:10.920376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.141 [2024-07-15 11:42:10.920422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.141 [2024-07-15 11:42:10.920444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.920484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.920506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.920547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.920569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.920609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.920631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.920671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.920693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
00:26:55.142 [2024-07-15 11:42:10.920734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.920756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.920797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.920819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.922662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.922699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.922745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.922768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.922809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.922832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.922872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.922895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.922936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.922957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.923004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.923027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.923068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.923093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.923135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.923158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:121 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.923199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.923221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.923269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.923293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.923333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.923356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.923396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.923418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.923458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.923480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.923521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.923544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.923584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.923607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.923647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.923669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.923711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.923733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.923773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.923800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.923841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.142 [2024-07-15 11:42:10.923864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.923904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.923926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.923967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.923989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.924029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.924052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.924092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.924113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.924155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.924179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.924220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.924241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.924289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.924313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.924354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.924377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.924418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:55.142 [2024-07-15 11:42:10.924440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.924480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.924502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.924542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.924569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.924609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.924631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.924672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.924694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.924734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.924757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.924796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.924820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.924861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.924883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:55.142 [2024-07-15 11:42:10.924924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.142 [2024-07-15 11:42:10.924946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.924987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.925009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.925049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 
lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.925072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.925112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.925134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.925175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.925197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.925237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.925269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.925310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.925332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.925376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.925399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.925439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.925461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.925501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.925523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.925564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.925586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.925627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.925648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.925690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.925711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.925751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.925774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.925814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.925837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.925877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:62856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.925900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.925942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:62864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.925963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.926004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.926026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.926067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.926089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.926134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.926156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.926198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.926221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.927783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.927822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 
00:26:55.143 [2024-07-15 11:42:10.927867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.927890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.927930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.927953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.927994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.928016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.928056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.928079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.928119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.928142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.928182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.928204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.928244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.143 [2024-07-15 11:42:10.928282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.928323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.143 [2024-07-15 11:42:10.928346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.928386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.143 [2024-07-15 11:42:10.928409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.928449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.143 [2024-07-15 11:42:10.928478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.928519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.143 [2024-07-15 11:42:10.928542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.928582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.143 [2024-07-15 11:42:10.928604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.928645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.143 [2024-07-15 11:42:10.928667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.928709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.143 [2024-07-15 11:42:10.928731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.928772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.143 [2024-07-15 11:42:10.928794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.928834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.143 [2024-07-15 11:42:10.928856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.928897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.143 [2024-07-15 11:42:10.928920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.928960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.143 [2024-07-15 11:42:10.928983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.929023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.143 [2024-07-15 11:42:10.929046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.929087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.143 [2024-07-15 11:42:10.929109] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.143 [2024-07-15 11:42:10.929150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.929173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.929214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.929245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.929297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.144 [2024-07-15 11:42:10.929320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.929361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.929383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.929425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.929448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.929489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:62416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.929510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.929551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.929574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.929614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.929637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.929678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.929700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.929742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.929764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.929804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.929827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.929868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.929891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.929932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.929954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.929995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.930017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.930061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.930086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.930128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.930152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.930193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.930216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.930267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.930292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.930333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.930357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.930398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:106 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.930423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.930463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.930487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.930528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.930552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.930593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.930618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.930659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.930683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.930724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.930748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.930790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.930814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.930859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.930884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.930925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.930949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.930991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.144 [2024-07-15 11:42:10.931014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.931055] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.144 [2024-07-15 11:42:10.931079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.931120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.144 [2024-07-15 11:42:10.931144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.931184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.144 [2024-07-15 11:42:10.931208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.931249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.144 [2024-07-15 11:42:10.931281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.931322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.144 [2024-07-15 11:42:10.931346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.931388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.144 [2024-07-15 11:42:10.931411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:55.144 [2024-07-15 11:42:10.931452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.931476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.931517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.931540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.931581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.931605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.931646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.931673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 
dnr:0 00:26:55.145 [2024-07-15 11:42:10.931715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.931738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.931780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.931803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.931844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.931868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.931909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.931932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.931973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.931997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.932038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.932061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.932101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.932125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.932166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.932188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.932229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.932252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.932302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.932326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:83 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.932367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.932391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.934189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.934234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.934290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.934316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.934357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.934381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.934422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.934446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.934487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.934511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.934552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.934576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.934617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.934641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.934681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.934705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.934745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.934770] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.934811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.934835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.934876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.934899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.934940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.934964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.935005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.935029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.935074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.935098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.935139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.935162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.935202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.935226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.935276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.935301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.935342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.935365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.935407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:55.145 [2024-07-15 11:42:10.935430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.935471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.145 [2024-07-15 11:42:10.935495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.935535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.935559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.935600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.935623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.935664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.935687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.935728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.935752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.935792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.935816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.935861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.935884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.935925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.145 [2024-07-15 11:42:10.935949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:55.145 [2024-07-15 11:42:10.935990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.936014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.936055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 
lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.936078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.936119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.936143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.936191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.936213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.936265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.936291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.936333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.936357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.936398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.936421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.936463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.936486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.936527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.936550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.936590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.936614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.936656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.936684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.936725] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.936748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.936789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:62760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.936813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.936853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.936877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.936918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:62776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.936942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.936983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.937006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.937046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.937070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.937111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.937134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.937175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.937198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.937239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.937272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.937314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.937337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
00:26:55.146 [2024-07-15 11:42:10.937378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.937402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.937443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.937470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.937512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.937536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.937576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.937601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.937642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.937665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.937707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.937730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.937771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.937794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.937836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.937860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.939423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.939462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.939507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.939531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:83 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.939573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.939596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.939637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.939661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.939702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.939725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.939767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.939791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.939837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.939861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.939902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.939926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.939967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.146 [2024-07-15 11:42:10.939990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.940031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.146 [2024-07-15 11:42:10.940054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.940096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.146 [2024-07-15 11:42:10.940120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.940161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.146 [2024-07-15 11:42:10.940185] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.940226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.146 [2024-07-15 11:42:10.940248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:55.146 [2024-07-15 11:42:10.940299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.940324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.940365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.940389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.940430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.940454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.940496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.940519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.940560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.940583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.940629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.940653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.940694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.940717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.940758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.940782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.940823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:55.147 [2024-07-15 11:42:10.940847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.940888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.940911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.940952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.940976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.941017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.147 [2024-07-15 11:42:10.941040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.941081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.941105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.941147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.941171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.941212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.941235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.941284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.941309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.941350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.941373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.941414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.941442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.941483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 
nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.941505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.941547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.941570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.941612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.941635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.941676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.941699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.941740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.941764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.941806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.941829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.941869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.941893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.941934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.941958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.941999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.942022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.942064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.942087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.942128] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.942152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.942193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.942221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.942276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.942302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.942344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.942366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.942407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.942431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.942472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.942496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.942537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.942560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.942601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.942625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.942666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.942690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.942731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.147 [2024-07-15 11:42:10.942754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 
m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.942794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.147 [2024-07-15 11:42:10.942818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.942859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.147 [2024-07-15 11:42:10.942882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.942923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.147 [2024-07-15 11:42:10.942947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:55.147 [2024-07-15 11:42:10.942987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.943015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.943057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.943080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.943121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.943145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.943186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.943210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.943250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.943283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.943325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.943348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.943389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.943413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.943453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.943476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.943518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.943541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.943582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.943605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.943646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.943670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.943710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.943733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.943773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.943796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.943842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.943865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.943906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.943930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.943971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.943995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.944037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.944060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.945866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.945903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.945950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.945974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.946015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.946039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.946079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.946103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.946144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.946167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.946209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.946232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.946282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.946308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.946348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.946372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.946425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.946450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.946491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:55.148 [2024-07-15 11:42:10.946515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.946556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.946581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.946622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.946646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.946688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.946711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.946752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.946775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.946816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.946839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.946879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.946903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.946944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.946967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.947008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.947032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.947073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.947096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:55.148 [2024-07-15 11:42:10.947136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 
lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.148 [2024-07-15 11:42:10.947160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.947201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.149 [2024-07-15 11:42:10.947228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.947276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.947301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.947343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.947368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.947408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.947432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.947473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.947496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.947537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.947561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.947601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.947625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.947666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.947689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.947731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.947753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.947794] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.947817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.947858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.947881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.947922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.947944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.947985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.948013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.948054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.948077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.948117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.948142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.948183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.948206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.948247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.948282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.948323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.948347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.948387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.948411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002c p:0 m:0 dnr:0 
00:26:55.149 [2024-07-15 11:42:10.948452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.948476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.948517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.948540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.948581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.948605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.948646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.948669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.948710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.948734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.948775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.948798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.948842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.948866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.948906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.948930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.948971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.948994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.949035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.949058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.949099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.949122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.949163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:62840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.949185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.949225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.949248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.949298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.949322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.949364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:62864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.949387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.949428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.949450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.949491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.949514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.951071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.951110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.951160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.951186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.951227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.149 [2024-07-15 11:42:10.951251] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:55.149 [2024-07-15 11:42:10.951303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.150 [2024-07-15 11:42:10.951327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.951367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.150 [2024-07-15 11:42:10.951391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.951432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.150 [2024-07-15 11:42:10.951455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.951496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.150 [2024-07-15 11:42:10.951519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.951561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.150 [2024-07-15 11:42:10.951584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.951626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.150 [2024-07-15 11:42:10.951650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.951690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.150 [2024-07-15 11:42:10.951715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.951756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.951780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.951821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.951845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.951885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:55.150 [2024-07-15 11:42:10.951909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.951951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.951979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.952021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.952045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.952086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.952109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.952151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.952175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.952215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.952239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.952289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.952314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.952355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:62352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.952379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.952419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.952443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.952484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.952508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.952549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 
nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.952572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.952614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.952637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.952678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.952700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.952740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.150 [2024-07-15 11:42:10.952767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.952809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.952832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.952873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.952897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.952938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.952962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.953003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.953027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.953068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.953092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.953134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.953157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.953198] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.953223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.953272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.953297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.953339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.953362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.953404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.953428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.953469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.953493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.953534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.953558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.953604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.953628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.953669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.953693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.953734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.953757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.953798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.953821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0067 p:0 m:0 
dnr:0 00:26:55.150 [2024-07-15 11:42:10.953863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.953885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.953926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.953950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.953990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.150 [2024-07-15 11:42:10.954014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:55.150 [2024-07-15 11:42:10.954055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.151 [2024-07-15 11:42:10.954078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:55.151 [2024-07-15 11:42:10.954120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.151 [2024-07-15 11:42:10.954143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:55.151 [2024-07-15 11:42:10.954184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.151 [2024-07-15 11:42:10.954207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:55.151 [2024-07-15 11:42:10.954249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.151 [2024-07-15 11:42:10.954286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:55.151 [2024-07-15 11:42:10.954327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.151 [2024-07-15 11:42:10.954350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:55.151 [2024-07-15 11:42:10.954396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.151 [2024-07-15 11:42:10.954419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:55.151 [2024-07-15 11:42:10.954461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.151 [2024-07-15 11:42:10.954485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:55.151 [2024-07-15 11:42:10.954526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.151 [2024-07-15 11:42:10.954549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:55.151 [2024-07-15 11:42:10.954590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.151 [2024-07-15 11:42:10.954614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:55.151 [2024-07-15 11:42:10.954655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.151 [2024-07-15 11:42:10.954678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:55.151 [2024-07-15 11:42:10.954719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.151 [2024-07-15 11:42:10.954743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:55.151 [2024-07-15 11:42:10.954784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.151 [2024-07-15 11:42:10.954807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:55.151 [2024-07-15 11:42:10.954847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.151 [2024-07-15 11:42:10.954871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:55.151 [2024-07-15 11:42:10.954912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.151 [2024-07-15 11:42:10.954935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:55.151 [2024-07-15 11:42:10.954976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.151 [2024-07-15 11:42:10.954999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:55.151 [2024-07-15 11:42:10.955040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.151 [2024-07-15 11:42:10.955063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:55.151 [2024-07-15 11:42:10.955105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.151 [2024-07-15 11:42:10.955128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:55.151 [2024-07-15 11:42:10.955169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.151 [2024-07-15 11:42:10.955197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:55.151 [2024-07-15 11:42:10.955238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.151 [2024-07-15 11:42:10.955270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:55.151 [2024-07-15 11:42:10.955313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.151 [2024-07-15 11:42:10.955337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:55.151 [2024-07-15 11:42:10.955379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.151 [2024-07-15 11:42:10.955402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:55.151 [2024-07-15 11:42:10.955443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.151 [2024-07-15 11:42:10.955467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.151 [2024-07-15 11:42:10.955508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.151 [2024-07-15 11:42:10.955531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.151 [2024-07-15 11:42:10.955572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.151 [2024-07-15 11:42:10.955595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.151 [2024-07-15 11:42:10.955637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.151 [2024-07-15 11:42:10.955660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:55.151 [2024-07-15 11:42:10.955702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.151 [2024-07-15 11:42:10.955726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:55.151 [2024-07-15 11:42:10.956307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.151 
00:26:55.151 [2024-07-15 11:42:10.956341] nvme_qpair.c: repeated *NOTICE* pairs from nvme_io_qpair_print_command and spdk_nvme_print_completion for every outstanding I/O on qid:1 (WRITE sqid:1 nsid:1 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, plus one READ at lba:62272 with SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; lba range 62272-63288): each command print is immediately followed by a completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0; the entries through [2024-07-15 11:42:10.961245] differ only in cid, lba and sqhd.
00:26:55.153 [2024-07-15 11:42:26.580887] nvme_qpair.c: the same *NOTICE* command/completion pattern repeats for the next burst of I/O (READ and WRITE, sqid:1 nsid:1 len:8, lba range 29040-30248, several LBAs reprinted under new cids), with every completion again reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, through [2024-07-15 11:42:26.602031].
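These notice pairs can be tallied straight from a saved copy of this console output; a minimal sketch, assuming the output was captured to a file named nvmf-tcp-phy-autotest.log (a hypothetical name, not one produced by this job):

  grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' nvmf-tcp-phy-autotest.log | wc -l   # count completions that failed with the ANA INACCESSIBLE status
  grep -o 'lba:[0-9]*' nvmf-tcp-phy-autotest.log | cut -d: -f2 | sort -n | uniq -c | head   # printed LBAs, lowest first, with how many times each appears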
00:26:55.156 [2024-07-15 11:42:26.602055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.602095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:30064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.156 [2024-07-15 11:42:26.602117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.602158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:30096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.156 [2024-07-15 11:42:26.602179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.602221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:29712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.156 [2024-07-15 11:42:26.602243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.602295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:29776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.156 [2024-07-15 11:42:26.602318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.602359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:29376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.156 [2024-07-15 11:42:26.602381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.602422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:29816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.156 [2024-07-15 11:42:26.602444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.602484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.156 [2024-07-15 11:42:26.602507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.602547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.156 [2024-07-15 11:42:26.602572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.602618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:29432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.156 [2024-07-15 11:42:26.602641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.602684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 
nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.156 [2024-07-15 11:42:26.602707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.602747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:30008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.156 [2024-07-15 11:42:26.602769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.602810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:30104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.156 [2024-07-15 11:42:26.602837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.602877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:30136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.156 [2024-07-15 11:42:26.602899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.602939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:29040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.156 [2024-07-15 11:42:26.602961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.603002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:29592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.156 [2024-07-15 11:42:26.603023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.603064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:29144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.156 [2024-07-15 11:42:26.603086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.603127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.156 [2024-07-15 11:42:26.603150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.603192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:29920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.156 [2024-07-15 11:42:26.603215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.603266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:29952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.156 [2024-07-15 11:42:26.603291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.603332] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:29984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.156 [2024-07-15 11:42:26.603356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.603398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:30016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.156 [2024-07-15 11:42:26.603422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.603464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.156 [2024-07-15 11:42:26.603486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.603527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:29544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.156 [2024-07-15 11:42:26.603550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.603594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.156 [2024-07-15 11:42:26.603621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.608803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.156 [2024-07-15 11:42:26.608849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.608894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:29752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.156 [2024-07-15 11:42:26.608917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.608959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:30264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.156 [2024-07-15 11:42:26.608982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.609021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:30280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.156 [2024-07-15 11:42:26.609043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.609083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:30296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.156 [2024-07-15 11:42:26.609106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
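(Annotation, not part of the captured console output.) The paired *NOTICE* records above come from SPDK's qpair print helpers: nvme_io_qpair_print_command logs each outstanding READ/WRITE, and spdk_nvme_print_completion logs the completion that failed it. The "(03/02)" in every completion is the NVMe status as SCT/SC, i.e. Status Code Type 0x3 (Path Related) with Status Code 0x02, which SPDK prints as ASYMMETRIC ACCESS INACCESSIBLE: the ANA state of this controller path makes the namespace unreachable, so the test's I/O on qid:1 is being failed back to the caller rather than indicating a media error. Below is a minimal, hedged sketch of how an application-side completion callback could recognize this condition; the field names (cpl->status.sct, cpl->status.sc) follow struct spdk_nvme_cpl from SPDK's public headers, while the function names and the retry comment are purely illustrative assumptions, not code from this test.

    /* Sketch only: detect the (03/02) ANA-inaccessible status seen in the log above. */
    #include <stdbool.h>
    #include "spdk/nvme.h"

    static bool
    io_failed_due_to_ana_inaccessible(const struct spdk_nvme_cpl *cpl)
    {
        /* sct/sc are bit-fields in cpl->status; 0x3/0x02 matches the log's "(03/02)". */
        return cpl->status.sct == 0x3 && cpl->status.sc == 0x02;
    }

    /* Hypothetical I/O completion callback name; signature matches spdk_nvme_cmd_cb. */
    static void
    io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)cb_arg;
        if (io_failed_due_to_ana_inaccessible(cpl)) {
            /* A multipath-aware caller would typically resubmit on another path here. */
        }
    }

In this autotest the flood of such notices is expected while a path is administratively made inaccessible; the I/O is retried or failed over by the layer driving the test, which is why the same LBAs reappear in later bursts.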
00:26:55.156 [2024-07-15 11:42:26.609147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.156 [2024-07-15 11:42:26.609169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.609210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:30328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.156 [2024-07-15 11:42:26.609232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:55.156 [2024-07-15 11:42:26.609286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.157 [2024-07-15 11:42:26.609310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.609350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:30072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.157 [2024-07-15 11:42:26.609372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.609414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.157 [2024-07-15 11:42:26.609436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.609476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:29888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.157 [2024-07-15 11:42:26.609499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.609540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.157 [2024-07-15 11:42:26.609562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.609610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.157 [2024-07-15 11:42:26.609632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.609674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.157 [2024-07-15 11:42:26.609696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.609737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:29616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.157 [2024-07-15 11:42:26.609761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.609801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.157 [2024-07-15 11:42:26.609823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.609864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.157 [2024-07-15 11:42:26.609886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.609927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:29816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.157 [2024-07-15 11:42:26.609949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.609989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:29304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.157 [2024-07-15 11:42:26.610012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.610053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.157 [2024-07-15 11:42:26.610075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.610116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:30104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.157 [2024-07-15 11:42:26.610138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.610180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:29040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.157 [2024-07-15 11:42:26.610201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.610242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:29144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.157 [2024-07-15 11:42:26.610272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.610312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:29920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.157 [2024-07-15 11:42:26.610336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.610380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:29984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.157 [2024-07-15 11:42:26.610403] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.610444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.157 [2024-07-15 11:42:26.610466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.610507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:29608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.157 [2024-07-15 11:42:26.610529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.610570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:29832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.157 [2024-07-15 11:42:26.610593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.610636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:29896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.157 [2024-07-15 11:42:26.610658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.612116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:29960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.157 [2024-07-15 11:42:26.612155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.612201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.157 [2024-07-15 11:42:26.612224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.612279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:30360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.157 [2024-07-15 11:42:26.612303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.612343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:30376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.157 [2024-07-15 11:42:26.612366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.612407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.157 [2024-07-15 11:42:26.612428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.612469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:30408 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:55.157 [2024-07-15 11:42:26.612491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.612531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:30424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.157 [2024-07-15 11:42:26.612554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.612594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:30440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.157 [2024-07-15 11:42:26.612629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.612671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.157 [2024-07-15 11:42:26.612693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.612734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.157 [2024-07-15 11:42:26.612756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.612796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.157 [2024-07-15 11:42:26.612818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.612858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:30160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.157 [2024-07-15 11:42:26.612880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.612921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:29560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.157 [2024-07-15 11:42:26.612943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:55.157 [2024-07-15 11:42:26.612983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.157 [2024-07-15 11:42:26.613005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.613046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:30480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.158 [2024-07-15 11:42:26.613069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.613110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:88 nsid:1 lba:30496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.158 [2024-07-15 11:42:26.613132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.616375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:30176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.158 [2024-07-15 11:42:26.616420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.616466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.158 [2024-07-15 11:42:26.616489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.616530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.158 [2024-07-15 11:42:26.616553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.616593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.158 [2024-07-15 11:42:26.616621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.616663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:29752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.158 [2024-07-15 11:42:26.616685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.616726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:30280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.158 [2024-07-15 11:42:26.616747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.616788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:30312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.158 [2024-07-15 11:42:26.616810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.616852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.158 [2024-07-15 11:42:26.616874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.616914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:29824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.158 [2024-07-15 11:42:26.616936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.616977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.158 [2024-07-15 11:42:26.616999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.617039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.158 [2024-07-15 11:42:26.617061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.617102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:30096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.158 [2024-07-15 11:42:26.617124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.617165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:29816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.158 [2024-07-15 11:42:26.617187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.617228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.158 [2024-07-15 11:42:26.617249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.617303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:29040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.158 [2024-07-15 11:42:26.617326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.617367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:29920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.158 [2024-07-15 11:42:26.617390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.617433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.158 [2024-07-15 11:42:26.617457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.617498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:29832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.158 [2024-07-15 11:42:26.617520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.617560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:29848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.158 [2024-07-15 11:42:26.617582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 
00:26:55.158 [2024-07-15 11:42:26.617623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:29976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.158 [2024-07-15 11:42:26.617645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.617685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.158 [2024-07-15 11:42:26.617708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.617748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.158 [2024-07-15 11:42:26.617770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.617811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:30024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.158 [2024-07-15 11:42:26.617833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.617873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:30376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.158 [2024-07-15 11:42:26.617895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.617934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.158 [2024-07-15 11:42:26.617957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.617997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:30440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.158 [2024-07-15 11:42:26.618019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.618061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:30472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.158 [2024-07-15 11:42:26.618082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.618123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.158 [2024-07-15 11:42:26.618145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.618189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:29704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.158 [2024-07-15 11:42:26.618213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.618263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.158 [2024-07-15 11:42:26.618288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.623478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:30520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.158 [2024-07-15 11:42:26.623524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.623569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:30536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.158 [2024-07-15 11:42:26.623593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.623635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:30552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.158 [2024-07-15 11:42:26.623656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.623696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.158 [2024-07-15 11:42:26.623718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.623759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:30584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.158 [2024-07-15 11:42:26.623780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.623821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:30600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.158 [2024-07-15 11:42:26.623843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.623884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.158 [2024-07-15 11:42:26.623905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.623945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:30632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.158 [2024-07-15 11:42:26.623967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.624007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.158 [2024-07-15 11:42:26.624030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:55.158 [2024-07-15 11:42:26.624070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:30664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.159 [2024-07-15 11:42:26.624092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.624133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:30680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.159 [2024-07-15 11:42:26.624162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.624203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.159 [2024-07-15 11:42:26.624224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.624275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.159 [2024-07-15 11:42:26.624299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.624340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.159 [2024-07-15 11:42:26.624361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.624402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.159 [2024-07-15 11:42:26.624424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.624464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.159 [2024-07-15 11:42:26.624486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.624526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:30344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.159 [2024-07-15 11:42:26.624548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.624588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.159 [2024-07-15 11:42:26.624610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.624650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:55.159 [2024-07-15 11:42:26.624671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.624712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.159 [2024-07-15 11:42:26.624733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.624774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:29920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.159 [2024-07-15 11:42:26.624796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.624837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.159 [2024-07-15 11:42:26.624859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.624899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:29976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.159 [2024-07-15 11:42:26.624925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.624965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:29656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.159 [2024-07-15 11:42:26.624989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.625028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.159 [2024-07-15 11:42:26.625050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.625090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.159 [2024-07-15 11:42:26.625112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.625152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.159 [2024-07-15 11:42:26.625174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.625214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:30496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.159 [2024-07-15 11:42:26.625236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.625286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 
lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.159 [2024-07-15 11:42:26.625309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.625351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:30064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.159 [2024-07-15 11:42:26.625372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.625413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:30008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.159 [2024-07-15 11:42:26.625435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.625477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:29592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.159 [2024-07-15 11:42:26.625500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.628120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:30688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.159 [2024-07-15 11:42:26.628163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.628208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:30704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.159 [2024-07-15 11:42:26.628231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.628281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:30720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.159 [2024-07-15 11:42:26.628312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.628352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.159 [2024-07-15 11:42:26.628374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.628415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:30752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.159 [2024-07-15 11:42:26.628437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.628477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:30768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.159 [2024-07-15 11:42:26.628500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.628539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:30784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.159 [2024-07-15 11:42:26.628562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.628601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.159 [2024-07-15 11:42:26.628623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.628664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.159 [2024-07-15 11:42:26.628686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.628727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:30400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.159 [2024-07-15 11:42:26.628748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.628789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:30432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.159 [2024-07-15 11:42:26.628811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.628852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.159 [2024-07-15 11:42:26.628874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.628915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:30504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.159 [2024-07-15 11:42:26.628937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.628977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:30816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.159 [2024-07-15 11:42:26.628999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.629039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:30832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.159 [2024-07-15 11:42:26.629062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.629105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:30848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.159 [2024-07-15 11:42:26.629128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 
00:26:55.159 [2024-07-15 11:42:26.629169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:30264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.159 [2024-07-15 11:42:26.629191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.629232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:30328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.159 [2024-07-15 11:42:26.629268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.629310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:30536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.159 [2024-07-15 11:42:26.629332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:55.159 [2024-07-15 11:42:26.629373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:30568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.629396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.629436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:30600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.629458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.629497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.629520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.629560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:30664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.629582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.629623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.160 [2024-07-15 11:42:26.629645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.629685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.160 [2024-07-15 11:42:26.629708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.629749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.629771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:50 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.629812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.629833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.629878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.629901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.629942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.160 [2024-07-15 11:42:26.629963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.630003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:29656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.160 [2024-07-15 11:42:26.630025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.630065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:30440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.630087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.630127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.630149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.630190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:30064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.160 [2024-07-15 11:42:26.630213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.630262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.160 [2024-07-15 11:42:26.630286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.633632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.160 [2024-07-15 11:42:26.633678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.633746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.633771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.633812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.633834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.633874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:30904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.633896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.633936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.633959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.633999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:30936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.634028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.634069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:30952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.634092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.634132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:30704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.634154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.634194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:30736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.634217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.634267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.634291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.634331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:30800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.634354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.634395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:30400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:55.160 [2024-07-15 11:42:26.634417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.634458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.160 [2024-07-15 11:42:26.634479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.634520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.634542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.634582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:30848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.634604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.634644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.160 [2024-07-15 11:42:26.634667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.634708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.634729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.634770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:30632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.634797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.634838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.160 [2024-07-15 11:42:26.634860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.634900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.634922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.634962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.634985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.635026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 
lba:29656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.160 [2024-07-15 11:42:26.635049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.635088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:30496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.635111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.635153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:29592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.160 [2024-07-15 11:42:26.635174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.636211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:30392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.160 [2024-07-15 11:42:26.636251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.636309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:30456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.160 [2024-07-15 11:42:26.636333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.636373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:30968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.160 [2024-07-15 11:42:26.636396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:55.160 [2024-07-15 11:42:26.636436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:30984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.161 [2024-07-15 11:42:26.636458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:55.161 [2024-07-15 11:42:26.636498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:31000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.161 [2024-07-15 11:42:26.636521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:55.161 [2024-07-15 11:42:26.636561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:31016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.161 [2024-07-15 11:42:26.636584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:55.161 [2024-07-15 11:42:26.636631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:30528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.161 [2024-07-15 11:42:26.636655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:55.161 [2024-07-15 11:42:26.636695] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:30560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.161 [2024-07-15 11:42:26.636717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:55.161 [2024-07-15 11:42:26.636759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.161 [2024-07-15 11:42:26.636780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:55.161 [2024-07-15 11:42:26.636821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.161 [2024-07-15 11:42:26.636844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:55.161 [2024-07-15 11:42:26.636884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:30656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.161 [2024-07-15 11:42:26.636907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.161 Received shutdown signal, test time was about 32.042548 seconds 00:26:55.161 00:26:55.161 Latency(us) 00:26:55.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.161 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:55.161 Verification LBA range: start 0x0 length 0x4000 00:26:55.161 Nvme0n1 : 32.04 4644.74 18.14 0.00 0.00 27481.34 1690.53 4087539.90 00:26:55.161 =================================================================================================================== 00:26:55.161 Total : 4644.74 18.14 0.00 0.00 27481.34 1690.53 4087539.90 00:26:55.161 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:55.420 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:55.420 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:55.420 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:55.420 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:55.420 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:55.420 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:55.420 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:55.420 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:55.420 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:55.420 rmmod nvme_tcp 00:26:55.420 rmmod nvme_fabrics 00:26:55.420 rmmod nvme_keyring 00:26:55.420 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:55.420 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:55.420 11:42:29 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:55.420 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2916172 ']' 00:26:55.420 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2916172 00:26:55.420 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2916172 ']' 00:26:55.420 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2916172 00:26:55.420 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:55.420 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:55.420 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2916172 00:26:55.695 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:55.695 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:55.695 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2916172' 00:26:55.695 killing process with pid 2916172 00:26:55.695 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2916172 00:26:55.695 11:42:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2916172 00:26:55.695 11:42:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:55.695 11:42:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:55.695 11:42:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:55.695 11:42:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:55.695 11:42:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:55.695 11:42:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.695 11:42:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:55.695 11:42:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.226 11:42:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:58.226 00:26:58.226 real 0m45.099s 00:26:58.226 user 2m8.027s 00:26:58.226 sys 0m11.124s 00:26:58.226 11:42:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:58.226 11:42:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:58.226 ************************************ 00:26:58.226 END TEST nvmf_host_multipath_status 00:26:58.226 ************************************ 00:26:58.226 11:42:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:58.226 11:42:32 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:58.226 11:42:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:58.226 11:42:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:58.226 11:42:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:58.226 ************************************ 
00:26:58.226 START TEST nvmf_discovery_remove_ifc 00:26:58.226 ************************************ 00:26:58.226 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:58.226 * Looking for test storage... 00:26:58.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:58.226 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:58.226 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:58.226 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:58.226 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:58.226 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:58.226 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:58.226 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:58.226 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:58.226 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:58.226 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:58.226 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:58.226 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:58.226 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:58.226 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:58.226 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:58.226 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:58.226 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:58.226 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:58.226 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:58.226 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.226 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.226 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:58.226 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:58.227 11:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:03.506 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:03.506 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:27:03.506 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:03.506 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:03.506 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:03.506 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:03.506 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:03.506 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:27:03.506 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:03.506 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:27:03.506 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:27:03.506 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:27:03.506 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:27:03.506 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:27:03.506 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:27:03.506 11:42:37 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:03.507 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:03.507 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 
]] 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:03.507 Found net devices under 0000:af:00.0: cvl_0_0 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:03.507 Found net devices under 0000:af:00.1: cvl_0_1 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:03.507 
11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:03.507 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:03.766 11:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:03.766 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:03.766 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:03.766 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:03.766 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:03.766 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:03.766 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:03.766 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:03.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:03.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:27:03.766 00:27:03.766 --- 10.0.0.2 ping statistics --- 00:27:03.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.766 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:27:03.766 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:03.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:03.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:27:03.766 00:27:03.766 --- 10.0.0.1 ping statistics --- 00:27:03.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.766 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:27:03.766 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:03.766 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:27:03.766 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:03.766 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:03.766 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:03.766 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:03.766 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:03.766 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:03.766 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:04.025 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:04.025 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:04.025 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:04.025 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.025 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2926753 00:27:04.025 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2926753 00:27:04.025 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:04.025 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2926753 ']' 00:27:04.025 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.025 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:04.025 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:04.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:04.025 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:04.025 11:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.025 [2024-07-15 11:42:38.312104] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:27:04.025 [2024-07-15 11:42:38.312160] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:04.025 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.025 [2024-07-15 11:42:38.397472] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.284 [2024-07-15 11:42:38.500636] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:04.284 [2024-07-15 11:42:38.500685] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:04.284 [2024-07-15 11:42:38.500703] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:04.284 [2024-07-15 11:42:38.500714] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:04.284 [2024-07-15 11:42:38.500724] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:04.284 [2024-07-15 11:42:38.500755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.851 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:04.851 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:27:04.851 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:04.851 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:04.851 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.851 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:04.851 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:04.851 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.851 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.851 [2024-07-15 11:42:39.296470] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:04.851 [2024-07-15 11:42:39.304638] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:05.110 null0 00:27:05.110 [2024-07-15 11:42:39.336643] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:05.110 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.110 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2927002 00:27:05.110 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2927002 /tmp/host.sock 00:27:05.110 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:05.110 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2927002 ']' 00:27:05.110 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:27:05.110 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:27:05.110 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:05.110 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:05.110 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:05.110 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.110 [2024-07-15 11:42:39.409859] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:27:05.110 [2024-07-15 11:42:39.409916] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2927002 ] 00:27:05.110 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.110 [2024-07-15 11:42:39.493027] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.370 [2024-07-15 11:42:39.579531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.370 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:05.370 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:27:05.370 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:05.370 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:05.370 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.370 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.370 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.370 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:05.370 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.370 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.370 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.370 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:05.370 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.370 11:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.305 [2024-07-15 11:42:40.733615] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:06.305 [2024-07-15 11:42:40.733642] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:06.305 [2024-07-15 11:42:40.733658] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:06.565 [2024-07-15 11:42:40.860111] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:06.824 [2024-07-15 11:42:41.038165] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:06.824 [2024-07-15 11:42:41.038218] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:06.824 [2024-07-15 11:42:41.038246] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:06.825 [2024-07-15 11:42:41.038269] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:06.825 [2024-07-15 11:42:41.038293] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:06.825 11:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.825 11:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:06.825 11:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:06.825 [2024-07-15 11:42:41.042908] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x24c9370 was disconnected and freed. delete nvme_qpair. 00:27:06.825 11:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:06.825 11:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:06.825 11:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.825 11:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.825 11:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:06.825 11:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:06.825 11:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.825 11:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:06.825 11:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:06.825 11:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:06.825 11:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:06.825 11:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:06.825 11:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:06.825 11:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.825 11:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:06.825 11:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.825 11:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:06.825 11:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:06.825 11:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.825 11:42:41 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:06.825 11:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:08.201 11:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:08.201 11:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:08.201 11:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:08.201 11:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.201 11:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:08.201 11:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:08.201 11:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:08.201 11:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.201 11:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:08.201 11:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:09.136 11:42:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:09.136 11:42:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:09.136 11:42:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:09.136 11:42:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.136 11:42:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:09.136 11:42:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:09.136 11:42:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:09.136 11:42:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.136 11:42:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:09.136 11:42:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:10.072 11:42:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:10.072 11:42:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:10.072 11:42:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:10.072 11:42:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.072 11:42:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:10.072 11:42:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:10.072 11:42:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:10.072 11:42:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.072 11:42:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:10.072 11:42:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:11.010 11:42:45 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:11.010 11:42:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:11.010 11:42:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:11.010 11:42:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.010 11:42:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:11.010 11:42:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:11.010 11:42:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:11.010 11:42:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.269 11:42:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:11.269 11:42:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:12.205 [2024-07-15 11:42:46.479084] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:12.206 [2024-07-15 11:42:46.479133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.206 [2024-07-15 11:42:46.479147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.206 [2024-07-15 11:42:46.479161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.206 [2024-07-15 11:42:46.479171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.206 [2024-07-15 11:42:46.479182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.206 [2024-07-15 11:42:46.479192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.206 [2024-07-15 11:42:46.479203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.206 [2024-07-15 11:42:46.479214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.206 [2024-07-15 11:42:46.479224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.206 [2024-07-15 11:42:46.479235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.206 [2024-07-15 11:42:46.479244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248fc00 is same with the state(5) to be set 00:27:12.206 [2024-07-15 11:42:46.489103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248fc00 (9): Bad file descriptor 00:27:12.206 11:42:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:12.206 11:42:46 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:12.206 11:42:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:12.206 11:42:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.206 11:42:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:12.206 11:42:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:12.206 11:42:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:12.206 [2024-07-15 11:42:46.499217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:13.143 [2024-07-15 11:42:47.554315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:13.143 [2024-07-15 11:42:47.554400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248fc00 with addr=10.0.0.2, port=4420 00:27:13.143 [2024-07-15 11:42:47.554442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248fc00 is same with the state(5) to be set 00:27:13.143 [2024-07-15 11:42:47.554495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248fc00 (9): Bad file descriptor 00:27:13.143 [2024-07-15 11:42:47.554625] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:13.143 [2024-07-15 11:42:47.554665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:13.143 [2024-07-15 11:42:47.554688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:13.143 [2024-07-15 11:42:47.554711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:13.143 [2024-07-15 11:42:47.554751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:13.143 [2024-07-15 11:42:47.554775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:13.143 11:42:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.143 11:42:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:13.143 11:42:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:14.523 [2024-07-15 11:42:48.557273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:14.523 [2024-07-15 11:42:48.557299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:14.523 [2024-07-15 11:42:48.557309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:14.523 [2024-07-15 11:42:48.557320] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:27:14.523 [2024-07-15 11:42:48.557335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.523 [2024-07-15 11:42:48.557358] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:14.523 [2024-07-15 11:42:48.557382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:14.523 [2024-07-15 11:42:48.557394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.523 [2024-07-15 11:42:48.557407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:14.523 [2024-07-15 11:42:48.557418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.523 [2024-07-15 11:42:48.557429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:14.523 [2024-07-15 11:42:48.557438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.523 [2024-07-15 11:42:48.557450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:14.523 [2024-07-15 11:42:48.557460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.523 [2024-07-15 11:42:48.557471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:14.523 [2024-07-15 11:42:48.557481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.523 [2024-07-15 11:42:48.557491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:27:14.523 [2024-07-15 11:42:48.558147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248f080 (9): Bad file descriptor 00:27:14.523 [2024-07-15 11:42:48.559158] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:14.523 [2024-07-15 11:42:48.559174] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:14.523 11:42:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:14.523 11:42:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:14.523 11:42:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:14.523 11:42:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.523 11:42:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:14.523 11:42:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:14.523 11:42:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:14.523 11:42:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.523 11:42:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:14.523 11:42:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:14.523 11:42:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:14.523 11:42:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:14.523 11:42:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:14.523 11:42:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:14.523 11:42:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:14.523 11:42:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.523 11:42:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:14.523 11:42:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:14.523 11:42:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:14.523 11:42:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.523 11:42:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:14.523 11:42:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:15.459 11:42:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:15.459 11:42:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:15.459 11:42:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:15.459 11:42:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.459 11:42:49 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:27:15.459 11:42:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:15.459 11:42:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:15.459 11:42:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.459 11:42:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:15.459 11:42:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:16.393 [2024-07-15 11:42:50.611453] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:16.393 [2024-07-15 11:42:50.611477] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:16.394 [2024-07-15 11:42:50.611496] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:16.394 [2024-07-15 11:42:50.697783] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:16.394 [2024-07-15 11:42:50.801990] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:16.394 [2024-07-15 11:42:50.802034] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:16.394 [2024-07-15 11:42:50.802059] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:16.394 [2024-07-15 11:42:50.802075] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:16.394 [2024-07-15 11:42:50.802084] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:16.394 [2024-07-15 11:42:50.809073] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2496920 was disconnected and freed. delete nvme_qpair. 
00:27:16.394 11:42:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:16.394 11:42:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:16.394 11:42:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:16.394 11:42:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.394 11:42:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:16.394 11:42:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:16.394 11:42:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:16.652 11:42:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.652 11:42:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:16.652 11:42:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:16.652 11:42:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2927002 00:27:16.652 11:42:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2927002 ']' 00:27:16.652 11:42:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2927002 00:27:16.652 11:42:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:27:16.652 11:42:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:16.652 11:42:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2927002 00:27:16.652 11:42:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:16.652 11:42:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:16.652 11:42:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2927002' 00:27:16.652 killing process with pid 2927002 00:27:16.652 11:42:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2927002 00:27:16.652 11:42:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2927002 00:27:16.911 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:16.911 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:16.911 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:27:16.911 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:16.911 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:27:16.911 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:16.911 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:16.911 rmmod nvme_tcp 00:27:16.911 rmmod nvme_fabrics 00:27:16.911 rmmod nvme_keyring 00:27:16.911 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:16.911 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:27:16.911 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
00:27:16.911 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2926753 ']' 00:27:16.911 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2926753 00:27:16.911 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2926753 ']' 00:27:16.911 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2926753 00:27:16.911 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:27:16.911 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:16.911 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2926753 00:27:16.911 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:16.911 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:16.911 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2926753' 00:27:16.911 killing process with pid 2926753 00:27:16.911 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2926753 00:27:16.911 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2926753 00:27:17.169 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:17.169 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:17.169 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:17.169 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:17.169 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:17.169 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.169 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:17.169 11:42:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.068 11:42:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:19.068 00:27:19.068 real 0m21.235s 00:27:19.068 user 0m26.074s 00:27:19.068 sys 0m5.643s 00:27:19.068 11:42:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:19.068 11:42:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:19.068 ************************************ 00:27:19.068 END TEST nvmf_discovery_remove_ifc 00:27:19.068 ************************************ 00:27:19.327 11:42:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:19.327 11:42:53 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:19.327 11:42:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:19.327 11:42:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:19.327 11:42:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:19.327 ************************************ 00:27:19.327 START TEST nvmf_identify_kernel_target 00:27:19.327 ************************************ 
00:27:19.327 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:19.327 * Looking for test storage... 00:27:19.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:19.327 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:19.327 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:19.327 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:19.327 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:19.327 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:19.327 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:19.327 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:19.327 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:19.327 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:19.327 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:19.327 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:19.327 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:19.327 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:19.327 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:19.327 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:19.327 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:19.327 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:19.328 11:42:53 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:27:19.328 11:42:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:24.603 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:24.603 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:24.603 Found net devices under 0000:af:00.0: cvl_0_0 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.603 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:24.604 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.604 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:24.604 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.604 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:24.604 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:24.604 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.604 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:24.604 Found net devices under 0000:af:00.1: cvl_0_1 00:27:24.604 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.604 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:24.604 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:24.604 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:24.604 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:24.604 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:24.604 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:24.604 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:24.604 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:24.604 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:24.604 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:24.604 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:24.604 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:24.604 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:24.604 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:24.604 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:24.604 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:24.604 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:24.604 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:24.863 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:24.863 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:24.863 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:24.863 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:24.863 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:24.863 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:24.863 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:24.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:24.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:27:24.863 00:27:24.863 --- 10.0.0.2 ping statistics --- 00:27:24.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.863 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:27:24.863 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:25.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:25.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:27:25.122 00:27:25.122 --- 10.0.0.1 ping statistics --- 00:27:25.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.122 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:27:25.122 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:25.122 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:27:25.122 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:25.122 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:25.122 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:25.122 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:25.122 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:25.122 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:25.122 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:25.122 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:25.122 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:25.122 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:27:25.122 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.122 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.122 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.122 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.122 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.122 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.122 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.122 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.122 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.122 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:25.122 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:25.122 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:25.123 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:25.123 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:25.123 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:25.123 11:42:59 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:25.123 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:27:25.123 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:25.123 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:25.123 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:25.123 11:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:27.660 Waiting for block devices as requested 00:27:27.919 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:27:27.919 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:27.919 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:28.178 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:28.178 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:28.178 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:28.437 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:28.437 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:28.437 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:28.437 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:28.696 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:28.696 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:28.696 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:28.955 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:28.955 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:28.955 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:28.955 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:29.214 No valid GPT data, bailing 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:29.214 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:27:29.473 00:27:29.473 Discovery Log Number of Records 2, Generation counter 2 00:27:29.473 =====Discovery Log Entry 0====== 00:27:29.473 trtype: tcp 00:27:29.473 adrfam: ipv4 00:27:29.473 subtype: current discovery subsystem 00:27:29.473 treq: not specified, sq flow control disable supported 00:27:29.473 portid: 1 00:27:29.473 trsvcid: 4420 00:27:29.473 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:29.473 traddr: 10.0.0.1 00:27:29.473 eflags: none 00:27:29.473 sectype: none 00:27:29.473 =====Discovery Log Entry 1====== 00:27:29.473 trtype: tcp 00:27:29.473 adrfam: ipv4 00:27:29.473 subtype: nvme subsystem 00:27:29.473 treq: not specified, sq flow control disable supported 00:27:29.473 portid: 1 00:27:29.473 trsvcid: 4420 00:27:29.473 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:29.473 traddr: 10.0.0.1 00:27:29.473 eflags: none 00:27:29.473 sectype: none 00:27:29.473 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:29.473 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:29.473 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.473 ===================================================== 00:27:29.473 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:29.473 ===================================================== 00:27:29.473 Controller Capabilities/Features 00:27:29.473 ================================ 00:27:29.473 Vendor ID: 0000 00:27:29.473 Subsystem Vendor ID: 0000 00:27:29.473 Serial Number: 3de3e870d4d36c030ce9 00:27:29.473 Model Number: Linux 00:27:29.473 Firmware Version: 6.7.0-68 00:27:29.473 Recommended Arb Burst: 0 00:27:29.473 IEEE OUI Identifier: 00 00 00 00:27:29.473 Multi-path I/O 00:27:29.473 May have multiple subsystem ports: No 00:27:29.473 May have multiple 
controllers: No 00:27:29.473 Associated with SR-IOV VF: No 00:27:29.473 Max Data Transfer Size: Unlimited 00:27:29.473 Max Number of Namespaces: 0 00:27:29.473 Max Number of I/O Queues: 1024 00:27:29.473 NVMe Specification Version (VS): 1.3 00:27:29.473 NVMe Specification Version (Identify): 1.3 00:27:29.473 Maximum Queue Entries: 1024 00:27:29.473 Contiguous Queues Required: No 00:27:29.473 Arbitration Mechanisms Supported 00:27:29.473 Weighted Round Robin: Not Supported 00:27:29.473 Vendor Specific: Not Supported 00:27:29.473 Reset Timeout: 7500 ms 00:27:29.473 Doorbell Stride: 4 bytes 00:27:29.473 NVM Subsystem Reset: Not Supported 00:27:29.473 Command Sets Supported 00:27:29.473 NVM Command Set: Supported 00:27:29.473 Boot Partition: Not Supported 00:27:29.473 Memory Page Size Minimum: 4096 bytes 00:27:29.473 Memory Page Size Maximum: 4096 bytes 00:27:29.473 Persistent Memory Region: Not Supported 00:27:29.473 Optional Asynchronous Events Supported 00:27:29.473 Namespace Attribute Notices: Not Supported 00:27:29.473 Firmware Activation Notices: Not Supported 00:27:29.473 ANA Change Notices: Not Supported 00:27:29.473 PLE Aggregate Log Change Notices: Not Supported 00:27:29.473 LBA Status Info Alert Notices: Not Supported 00:27:29.473 EGE Aggregate Log Change Notices: Not Supported 00:27:29.473 Normal NVM Subsystem Shutdown event: Not Supported 00:27:29.473 Zone Descriptor Change Notices: Not Supported 00:27:29.473 Discovery Log Change Notices: Supported 00:27:29.473 Controller Attributes 00:27:29.473 128-bit Host Identifier: Not Supported 00:27:29.473 Non-Operational Permissive Mode: Not Supported 00:27:29.473 NVM Sets: Not Supported 00:27:29.473 Read Recovery Levels: Not Supported 00:27:29.473 Endurance Groups: Not Supported 00:27:29.474 Predictable Latency Mode: Not Supported 00:27:29.474 Traffic Based Keep ALive: Not Supported 00:27:29.474 Namespace Granularity: Not Supported 00:27:29.474 SQ Associations: Not Supported 00:27:29.474 UUID List: Not Supported 00:27:29.474 Multi-Domain Subsystem: Not Supported 00:27:29.474 Fixed Capacity Management: Not Supported 00:27:29.474 Variable Capacity Management: Not Supported 00:27:29.474 Delete Endurance Group: Not Supported 00:27:29.474 Delete NVM Set: Not Supported 00:27:29.474 Extended LBA Formats Supported: Not Supported 00:27:29.474 Flexible Data Placement Supported: Not Supported 00:27:29.474 00:27:29.474 Controller Memory Buffer Support 00:27:29.474 ================================ 00:27:29.474 Supported: No 00:27:29.474 00:27:29.474 Persistent Memory Region Support 00:27:29.474 ================================ 00:27:29.474 Supported: No 00:27:29.474 00:27:29.474 Admin Command Set Attributes 00:27:29.474 ============================ 00:27:29.474 Security Send/Receive: Not Supported 00:27:29.474 Format NVM: Not Supported 00:27:29.474 Firmware Activate/Download: Not Supported 00:27:29.474 Namespace Management: Not Supported 00:27:29.474 Device Self-Test: Not Supported 00:27:29.474 Directives: Not Supported 00:27:29.474 NVMe-MI: Not Supported 00:27:29.474 Virtualization Management: Not Supported 00:27:29.474 Doorbell Buffer Config: Not Supported 00:27:29.474 Get LBA Status Capability: Not Supported 00:27:29.474 Command & Feature Lockdown Capability: Not Supported 00:27:29.474 Abort Command Limit: 1 00:27:29.474 Async Event Request Limit: 1 00:27:29.474 Number of Firmware Slots: N/A 00:27:29.474 Firmware Slot 1 Read-Only: N/A 00:27:29.474 Firmware Activation Without Reset: N/A 00:27:29.474 Multiple Update Detection Support: N/A 
00:27:29.474 Firmware Update Granularity: No Information Provided 00:27:29.474 Per-Namespace SMART Log: No 00:27:29.474 Asymmetric Namespace Access Log Page: Not Supported 00:27:29.474 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:29.474 Command Effects Log Page: Not Supported 00:27:29.474 Get Log Page Extended Data: Supported 00:27:29.474 Telemetry Log Pages: Not Supported 00:27:29.474 Persistent Event Log Pages: Not Supported 00:27:29.474 Supported Log Pages Log Page: May Support 00:27:29.474 Commands Supported & Effects Log Page: Not Supported 00:27:29.474 Feature Identifiers & Effects Log Page:May Support 00:27:29.474 NVMe-MI Commands & Effects Log Page: May Support 00:27:29.474 Data Area 4 for Telemetry Log: Not Supported 00:27:29.474 Error Log Page Entries Supported: 1 00:27:29.474 Keep Alive: Not Supported 00:27:29.474 00:27:29.474 NVM Command Set Attributes 00:27:29.474 ========================== 00:27:29.474 Submission Queue Entry Size 00:27:29.474 Max: 1 00:27:29.474 Min: 1 00:27:29.474 Completion Queue Entry Size 00:27:29.474 Max: 1 00:27:29.474 Min: 1 00:27:29.474 Number of Namespaces: 0 00:27:29.474 Compare Command: Not Supported 00:27:29.474 Write Uncorrectable Command: Not Supported 00:27:29.474 Dataset Management Command: Not Supported 00:27:29.474 Write Zeroes Command: Not Supported 00:27:29.474 Set Features Save Field: Not Supported 00:27:29.474 Reservations: Not Supported 00:27:29.474 Timestamp: Not Supported 00:27:29.474 Copy: Not Supported 00:27:29.474 Volatile Write Cache: Not Present 00:27:29.474 Atomic Write Unit (Normal): 1 00:27:29.474 Atomic Write Unit (PFail): 1 00:27:29.474 Atomic Compare & Write Unit: 1 00:27:29.474 Fused Compare & Write: Not Supported 00:27:29.474 Scatter-Gather List 00:27:29.474 SGL Command Set: Supported 00:27:29.474 SGL Keyed: Not Supported 00:27:29.474 SGL Bit Bucket Descriptor: Not Supported 00:27:29.474 SGL Metadata Pointer: Not Supported 00:27:29.474 Oversized SGL: Not Supported 00:27:29.474 SGL Metadata Address: Not Supported 00:27:29.474 SGL Offset: Supported 00:27:29.474 Transport SGL Data Block: Not Supported 00:27:29.474 Replay Protected Memory Block: Not Supported 00:27:29.474 00:27:29.474 Firmware Slot Information 00:27:29.474 ========================= 00:27:29.474 Active slot: 0 00:27:29.474 00:27:29.474 00:27:29.474 Error Log 00:27:29.474 ========= 00:27:29.474 00:27:29.474 Active Namespaces 00:27:29.474 ================= 00:27:29.474 Discovery Log Page 00:27:29.474 ================== 00:27:29.474 Generation Counter: 2 00:27:29.474 Number of Records: 2 00:27:29.474 Record Format: 0 00:27:29.474 00:27:29.474 Discovery Log Entry 0 00:27:29.474 ---------------------- 00:27:29.474 Transport Type: 3 (TCP) 00:27:29.474 Address Family: 1 (IPv4) 00:27:29.474 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:29.474 Entry Flags: 00:27:29.474 Duplicate Returned Information: 0 00:27:29.474 Explicit Persistent Connection Support for Discovery: 0 00:27:29.474 Transport Requirements: 00:27:29.474 Secure Channel: Not Specified 00:27:29.474 Port ID: 1 (0x0001) 00:27:29.474 Controller ID: 65535 (0xffff) 00:27:29.474 Admin Max SQ Size: 32 00:27:29.474 Transport Service Identifier: 4420 00:27:29.474 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:29.474 Transport Address: 10.0.0.1 00:27:29.474 Discovery Log Entry 1 00:27:29.474 ---------------------- 00:27:29.474 Transport Type: 3 (TCP) 00:27:29.474 Address Family: 1 (IPv4) 00:27:29.474 Subsystem Type: 2 (NVM Subsystem) 00:27:29.474 Entry Flags: 
00:27:29.474 Duplicate Returned Information: 0 00:27:29.474 Explicit Persistent Connection Support for Discovery: 0 00:27:29.474 Transport Requirements: 00:27:29.474 Secure Channel: Not Specified 00:27:29.474 Port ID: 1 (0x0001) 00:27:29.474 Controller ID: 65535 (0xffff) 00:27:29.474 Admin Max SQ Size: 32 00:27:29.474 Transport Service Identifier: 4420 00:27:29.474 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:29.474 Transport Address: 10.0.0.1 00:27:29.474 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:29.474 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.734 get_feature(0x01) failed 00:27:29.734 get_feature(0x02) failed 00:27:29.734 get_feature(0x04) failed 00:27:29.734 ===================================================== 00:27:29.734 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:29.734 ===================================================== 00:27:29.734 Controller Capabilities/Features 00:27:29.734 ================================ 00:27:29.734 Vendor ID: 0000 00:27:29.734 Subsystem Vendor ID: 0000 00:27:29.734 Serial Number: 32aa35d6a4e8af451818 00:27:29.734 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:29.734 Firmware Version: 6.7.0-68 00:27:29.734 Recommended Arb Burst: 6 00:27:29.734 IEEE OUI Identifier: 00 00 00 00:27:29.734 Multi-path I/O 00:27:29.734 May have multiple subsystem ports: Yes 00:27:29.734 May have multiple controllers: Yes 00:27:29.734 Associated with SR-IOV VF: No 00:27:29.734 Max Data Transfer Size: Unlimited 00:27:29.734 Max Number of Namespaces: 1024 00:27:29.734 Max Number of I/O Queues: 128 00:27:29.734 NVMe Specification Version (VS): 1.3 00:27:29.734 NVMe Specification Version (Identify): 1.3 00:27:29.734 Maximum Queue Entries: 1024 00:27:29.734 Contiguous Queues Required: No 00:27:29.734 Arbitration Mechanisms Supported 00:27:29.734 Weighted Round Robin: Not Supported 00:27:29.734 Vendor Specific: Not Supported 00:27:29.735 Reset Timeout: 7500 ms 00:27:29.735 Doorbell Stride: 4 bytes 00:27:29.735 NVM Subsystem Reset: Not Supported 00:27:29.735 Command Sets Supported 00:27:29.735 NVM Command Set: Supported 00:27:29.735 Boot Partition: Not Supported 00:27:29.735 Memory Page Size Minimum: 4096 bytes 00:27:29.735 Memory Page Size Maximum: 4096 bytes 00:27:29.735 Persistent Memory Region: Not Supported 00:27:29.735 Optional Asynchronous Events Supported 00:27:29.735 Namespace Attribute Notices: Supported 00:27:29.735 Firmware Activation Notices: Not Supported 00:27:29.735 ANA Change Notices: Supported 00:27:29.735 PLE Aggregate Log Change Notices: Not Supported 00:27:29.735 LBA Status Info Alert Notices: Not Supported 00:27:29.735 EGE Aggregate Log Change Notices: Not Supported 00:27:29.735 Normal NVM Subsystem Shutdown event: Not Supported 00:27:29.735 Zone Descriptor Change Notices: Not Supported 00:27:29.735 Discovery Log Change Notices: Not Supported 00:27:29.735 Controller Attributes 00:27:29.735 128-bit Host Identifier: Supported 00:27:29.735 Non-Operational Permissive Mode: Not Supported 00:27:29.735 NVM Sets: Not Supported 00:27:29.735 Read Recovery Levels: Not Supported 00:27:29.735 Endurance Groups: Not Supported 00:27:29.735 Predictable Latency Mode: Not Supported 00:27:29.735 Traffic Based Keep ALive: Supported 00:27:29.735 Namespace Granularity: Not Supported 
00:27:29.735 SQ Associations: Not Supported 00:27:29.735 UUID List: Not Supported 00:27:29.735 Multi-Domain Subsystem: Not Supported 00:27:29.735 Fixed Capacity Management: Not Supported 00:27:29.735 Variable Capacity Management: Not Supported 00:27:29.735 Delete Endurance Group: Not Supported 00:27:29.735 Delete NVM Set: Not Supported 00:27:29.735 Extended LBA Formats Supported: Not Supported 00:27:29.735 Flexible Data Placement Supported: Not Supported 00:27:29.735 00:27:29.735 Controller Memory Buffer Support 00:27:29.735 ================================ 00:27:29.735 Supported: No 00:27:29.735 00:27:29.735 Persistent Memory Region Support 00:27:29.735 ================================ 00:27:29.735 Supported: No 00:27:29.735 00:27:29.735 Admin Command Set Attributes 00:27:29.735 ============================ 00:27:29.735 Security Send/Receive: Not Supported 00:27:29.735 Format NVM: Not Supported 00:27:29.735 Firmware Activate/Download: Not Supported 00:27:29.735 Namespace Management: Not Supported 00:27:29.735 Device Self-Test: Not Supported 00:27:29.735 Directives: Not Supported 00:27:29.735 NVMe-MI: Not Supported 00:27:29.735 Virtualization Management: Not Supported 00:27:29.735 Doorbell Buffer Config: Not Supported 00:27:29.735 Get LBA Status Capability: Not Supported 00:27:29.735 Command & Feature Lockdown Capability: Not Supported 00:27:29.735 Abort Command Limit: 4 00:27:29.735 Async Event Request Limit: 4 00:27:29.735 Number of Firmware Slots: N/A 00:27:29.735 Firmware Slot 1 Read-Only: N/A 00:27:29.735 Firmware Activation Without Reset: N/A 00:27:29.735 Multiple Update Detection Support: N/A 00:27:29.735 Firmware Update Granularity: No Information Provided 00:27:29.735 Per-Namespace SMART Log: Yes 00:27:29.735 Asymmetric Namespace Access Log Page: Supported 00:27:29.735 ANA Transition Time : 10 sec 00:27:29.735 00:27:29.735 Asymmetric Namespace Access Capabilities 00:27:29.735 ANA Optimized State : Supported 00:27:29.735 ANA Non-Optimized State : Supported 00:27:29.735 ANA Inaccessible State : Supported 00:27:29.735 ANA Persistent Loss State : Supported 00:27:29.735 ANA Change State : Supported 00:27:29.735 ANAGRPID is not changed : No 00:27:29.735 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:29.735 00:27:29.735 ANA Group Identifier Maximum : 128 00:27:29.735 Number of ANA Group Identifiers : 128 00:27:29.735 Max Number of Allowed Namespaces : 1024 00:27:29.735 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:29.735 Command Effects Log Page: Supported 00:27:29.735 Get Log Page Extended Data: Supported 00:27:29.735 Telemetry Log Pages: Not Supported 00:27:29.735 Persistent Event Log Pages: Not Supported 00:27:29.735 Supported Log Pages Log Page: May Support 00:27:29.735 Commands Supported & Effects Log Page: Not Supported 00:27:29.735 Feature Identifiers & Effects Log Page:May Support 00:27:29.735 NVMe-MI Commands & Effects Log Page: May Support 00:27:29.735 Data Area 4 for Telemetry Log: Not Supported 00:27:29.735 Error Log Page Entries Supported: 128 00:27:29.735 Keep Alive: Supported 00:27:29.735 Keep Alive Granularity: 1000 ms 00:27:29.735 00:27:29.735 NVM Command Set Attributes 00:27:29.735 ========================== 00:27:29.735 Submission Queue Entry Size 00:27:29.735 Max: 64 00:27:29.735 Min: 64 00:27:29.735 Completion Queue Entry Size 00:27:29.735 Max: 16 00:27:29.735 Min: 16 00:27:29.735 Number of Namespaces: 1024 00:27:29.735 Compare Command: Not Supported 00:27:29.735 Write Uncorrectable Command: Not Supported 00:27:29.735 Dataset Management Command: Supported 
00:27:29.735 Write Zeroes Command: Supported 00:27:29.735 Set Features Save Field: Not Supported 00:27:29.735 Reservations: Not Supported 00:27:29.735 Timestamp: Not Supported 00:27:29.735 Copy: Not Supported 00:27:29.735 Volatile Write Cache: Present 00:27:29.735 Atomic Write Unit (Normal): 1 00:27:29.735 Atomic Write Unit (PFail): 1 00:27:29.735 Atomic Compare & Write Unit: 1 00:27:29.735 Fused Compare & Write: Not Supported 00:27:29.735 Scatter-Gather List 00:27:29.735 SGL Command Set: Supported 00:27:29.735 SGL Keyed: Not Supported 00:27:29.735 SGL Bit Bucket Descriptor: Not Supported 00:27:29.735 SGL Metadata Pointer: Not Supported 00:27:29.735 Oversized SGL: Not Supported 00:27:29.735 SGL Metadata Address: Not Supported 00:27:29.735 SGL Offset: Supported 00:27:29.735 Transport SGL Data Block: Not Supported 00:27:29.735 Replay Protected Memory Block: Not Supported 00:27:29.735 00:27:29.735 Firmware Slot Information 00:27:29.735 ========================= 00:27:29.735 Active slot: 0 00:27:29.735 00:27:29.735 Asymmetric Namespace Access 00:27:29.735 =========================== 00:27:29.735 Change Count : 0 00:27:29.735 Number of ANA Group Descriptors : 1 00:27:29.735 ANA Group Descriptor : 0 00:27:29.735 ANA Group ID : 1 00:27:29.735 Number of NSID Values : 1 00:27:29.735 Change Count : 0 00:27:29.735 ANA State : 1 00:27:29.735 Namespace Identifier : 1 00:27:29.735 00:27:29.735 Commands Supported and Effects 00:27:29.735 ============================== 00:27:29.735 Admin Commands 00:27:29.735 -------------- 00:27:29.735 Get Log Page (02h): Supported 00:27:29.735 Identify (06h): Supported 00:27:29.735 Abort (08h): Supported 00:27:29.735 Set Features (09h): Supported 00:27:29.735 Get Features (0Ah): Supported 00:27:29.735 Asynchronous Event Request (0Ch): Supported 00:27:29.735 Keep Alive (18h): Supported 00:27:29.735 I/O Commands 00:27:29.735 ------------ 00:27:29.735 Flush (00h): Supported 00:27:29.735 Write (01h): Supported LBA-Change 00:27:29.735 Read (02h): Supported 00:27:29.735 Write Zeroes (08h): Supported LBA-Change 00:27:29.735 Dataset Management (09h): Supported 00:27:29.735 00:27:29.735 Error Log 00:27:29.735 ========= 00:27:29.735 Entry: 0 00:27:29.735 Error Count: 0x3 00:27:29.735 Submission Queue Id: 0x0 00:27:29.735 Command Id: 0x5 00:27:29.735 Phase Bit: 0 00:27:29.735 Status Code: 0x2 00:27:29.735 Status Code Type: 0x0 00:27:29.735 Do Not Retry: 1 00:27:29.735 Error Location: 0x28 00:27:29.735 LBA: 0x0 00:27:29.735 Namespace: 0x0 00:27:29.735 Vendor Log Page: 0x0 00:27:29.735 ----------- 00:27:29.735 Entry: 1 00:27:29.735 Error Count: 0x2 00:27:29.735 Submission Queue Id: 0x0 00:27:29.735 Command Id: 0x5 00:27:29.735 Phase Bit: 0 00:27:29.735 Status Code: 0x2 00:27:29.735 Status Code Type: 0x0 00:27:29.735 Do Not Retry: 1 00:27:29.735 Error Location: 0x28 00:27:29.735 LBA: 0x0 00:27:29.735 Namespace: 0x0 00:27:29.735 Vendor Log Page: 0x0 00:27:29.735 ----------- 00:27:29.735 Entry: 2 00:27:29.735 Error Count: 0x1 00:27:29.735 Submission Queue Id: 0x0 00:27:29.735 Command Id: 0x4 00:27:29.735 Phase Bit: 0 00:27:29.735 Status Code: 0x2 00:27:29.735 Status Code Type: 0x0 00:27:29.735 Do Not Retry: 1 00:27:29.735 Error Location: 0x28 00:27:29.735 LBA: 0x0 00:27:29.735 Namespace: 0x0 00:27:29.735 Vendor Log Page: 0x0 00:27:29.735 00:27:29.735 Number of Queues 00:27:29.735 ================ 00:27:29.735 Number of I/O Submission Queues: 128 00:27:29.735 Number of I/O Completion Queues: 128 00:27:29.735 00:27:29.735 ZNS Specific Controller Data 00:27:29.735 
============================ 00:27:29.735 Zone Append Size Limit: 0 00:27:29.735 00:27:29.735 00:27:29.735 Active Namespaces 00:27:29.735 ================= 00:27:29.735 get_feature(0x05) failed 00:27:29.735 Namespace ID:1 00:27:29.735 Command Set Identifier: NVM (00h) 00:27:29.735 Deallocate: Supported 00:27:29.736 Deallocated/Unwritten Error: Not Supported 00:27:29.736 Deallocated Read Value: Unknown 00:27:29.736 Deallocate in Write Zeroes: Not Supported 00:27:29.736 Deallocated Guard Field: 0xFFFF 00:27:29.736 Flush: Supported 00:27:29.736 Reservation: Not Supported 00:27:29.736 Namespace Sharing Capabilities: Multiple Controllers 00:27:29.736 Size (in LBAs): 1953525168 (931GiB) 00:27:29.736 Capacity (in LBAs): 1953525168 (931GiB) 00:27:29.736 Utilization (in LBAs): 1953525168 (931GiB) 00:27:29.736 UUID: 447afcd8-4d9b-4900-b098-ec24d5ccfd05 00:27:29.736 Thin Provisioning: Not Supported 00:27:29.736 Per-NS Atomic Units: Yes 00:27:29.736 Atomic Boundary Size (Normal): 0 00:27:29.736 Atomic Boundary Size (PFail): 0 00:27:29.736 Atomic Boundary Offset: 0 00:27:29.736 NGUID/EUI64 Never Reused: No 00:27:29.736 ANA group ID: 1 00:27:29.736 Namespace Write Protected: No 00:27:29.736 Number of LBA Formats: 1 00:27:29.736 Current LBA Format: LBA Format #00 00:27:29.736 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:29.736 00:27:29.736 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:29.736 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:29.736 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:29.736 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:29.736 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:29.736 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:29.736 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:29.736 rmmod nvme_tcp 00:27:29.736 rmmod nvme_fabrics 00:27:29.736 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:29.736 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:29.736 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:29.736 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:29.736 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:29.736 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:29.736 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:29.736 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:29.736 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:29.736 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.736 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:29.736 11:43:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.640 11:43:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:31.640 
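[editor's note] The identify pass above is driven entirely by spdk_nvme_identify against the kernel-hosted target at 10.0.0.1:4420, and the teardown that follows unloads nvme-tcp/nvme-fabrics and flushes the initiator interface. A minimal sketch of reproducing the same check by hand, assuming stock nvme-cli is installed and the kernel target from this run is still listening (NQN, address and teardown steps are taken from the log above; the nvme-cli calls are an alternative, not part of the script):

  # Discovery should list both nqn.2014-08.org.nvmexpress.discovery and nqn.2016-06.io.spdk:testnqn.
  nvme discover -t tcp -a 10.0.0.1 -s 4420

  # Same identify invocation the test uses.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

  # Equivalent of the nvmftestfini teardown seen above.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  ip -4 addr flush cvl_0_1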
11:43:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:31.640 11:43:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:31.640 11:43:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:31.640 11:43:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:31.640 11:43:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:31.640 11:43:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:31.640 11:43:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:31.640 11:43:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:31.640 11:43:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:31.943 11:43:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:34.503 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:34.503 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:34.503 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:34.503 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:34.503 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:34.503 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:34.503 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:34.503 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:34.503 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:34.503 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:34.762 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:34.762 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:34.762 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:34.762 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:34.762 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:34.762 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:35.697 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:27:35.697 00:27:35.697 real 0m16.367s 00:27:35.697 user 0m4.055s 00:27:35.697 sys 0m8.527s 00:27:35.697 11:43:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:35.697 11:43:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:35.697 ************************************ 00:27:35.697 END TEST nvmf_identify_kernel_target 00:27:35.697 ************************************ 00:27:35.697 11:43:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:35.697 11:43:10 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:35.697 11:43:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:35.697 11:43:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:35.697 11:43:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:35.697 ************************************ 00:27:35.697 START TEST nvmf_auth_host 00:27:35.697 ************************************ 00:27:35.697 11:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:35.697 * Looking for test storage... 00:27:35.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:35.697 11:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:35.697 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:35.697 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:35.697 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:35.697 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:35.697 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:35.697 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:35.698 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:35.956 11:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:35.956 11:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:35.956 11:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:35.956 11:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:35.956 11:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:35.956 11:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:35.956 11:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:35.956 11:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:27:35.956 11:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:35.956 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:35.956 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:35.956 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:35.956 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:35.956 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:35.956 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.956 11:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:35.956 11:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.956 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:35.956 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:35.956 11:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:35.956 11:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:41.226 
11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:41.226 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:41.226 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:41.226 Found net devices under 0000:af:00.0: 
cvl_0_0 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:41.226 Found net devices under 0000:af:00.1: cvl_0_1 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:41.226 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:41.227 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:41.227 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:41.227 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:41.227 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:41.227 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:41.227 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:41.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:41.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:27:41.485 00:27:41.485 --- 10.0.0.2 ping statistics --- 00:27:41.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.485 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:41.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:41.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:27:41.485 00:27:41.485 --- 10.0.0.1 ping statistics --- 00:27:41.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.485 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2939294 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2939294 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2939294 ']' 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:41.485 11:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0017138e003e336df25d6271c620875f 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.CXi 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0017138e003e336df25d6271c620875f 0 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0017138e003e336df25d6271c620875f 0 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0017138e003e336df25d6271c620875f 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.CXi 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.CXi 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.CXi 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:42.053 
11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=95d9a7246599f1ab009931e2f3f43009fb0bd0b5cbf91add92c5704d8f6e6010 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.uQj 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 95d9a7246599f1ab009931e2f3f43009fb0bd0b5cbf91add92c5704d8f6e6010 3 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 95d9a7246599f1ab009931e2f3f43009fb0bd0b5cbf91add92c5704d8f6e6010 3 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=95d9a7246599f1ab009931e2f3f43009fb0bd0b5cbf91add92c5704d8f6e6010 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.uQj 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.uQj 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.uQj 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1c35db37369f88df196cf2e36a3bd8d0d0c459c865bdbb4b 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.wss 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1c35db37369f88df196cf2e36a3bd8d0d0c459c865bdbb4b 0 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1c35db37369f88df196cf2e36a3bd8d0d0c459c865bdbb4b 0 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1c35db37369f88df196cf2e36a3bd8d0d0c459c865bdbb4b 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.wss 00:27:42.053 11:43:16 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.wss 00:27:42.053 11:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.wss 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=44c012abbed96596822aac190f3e6e46d7a39114a055b17c 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.AgT 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 44c012abbed96596822aac190f3e6e46d7a39114a055b17c 2 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 44c012abbed96596822aac190f3e6e46d7a39114a055b17c 2 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=44c012abbed96596822aac190f3e6e46d7a39114a055b17c 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.AgT 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.AgT 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.AgT 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f12b5f0c55a615a2c742803e6f9444f7 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.kuU 00:27:42.312 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f12b5f0c55a615a2c742803e6f9444f7 1 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f12b5f0c55a615a2c742803e6f9444f7 1 
00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f12b5f0c55a615a2c742803e6f9444f7 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.kuU 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.kuU 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.kuU 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=51ce2167f1a3c60e51bacd9bf8762b29 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.oTY 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 51ce2167f1a3c60e51bacd9bf8762b29 1 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 51ce2167f1a3c60e51bacd9bf8762b29 1 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=51ce2167f1a3c60e51bacd9bf8762b29 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.oTY 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.oTY 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.oTY 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=8d4662b346b1c03f24b5d3d6f9445c1b2c3c3130762cc8b5 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.sMD 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8d4662b346b1c03f24b5d3d6f9445c1b2c3c3130762cc8b5 2 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8d4662b346b1c03f24b5d3d6f9445c1b2c3c3130762cc8b5 2 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8d4662b346b1c03f24b5d3d6f9445c1b2c3c3130762cc8b5 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:42.313 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.sMD 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.sMD 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.sMD 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=58a96d31c0ef11065ee3f9e822ccebcc 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.OrS 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 58a96d31c0ef11065ee3f9e822ccebcc 0 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 58a96d31c0ef11065ee3f9e822ccebcc 0 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=58a96d31c0ef11065ee3f9e822ccebcc 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.OrS 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.OrS 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.OrS 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=de336d12a3c6af08a032c3ece3dd11696bbff8678bd5300deb44fc1c4fefa40f 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ctC 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key de336d12a3c6af08a032c3ece3dd11696bbff8678bd5300deb44fc1c4fefa40f 3 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 de336d12a3c6af08a032c3ece3dd11696bbff8678bd5300deb44fc1c4fefa40f 3 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=de336d12a3c6af08a032c3ece3dd11696bbff8678bd5300deb44fc1c4fefa40f 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ctC 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ctC 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.ctC 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2939294 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2939294 ']' 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
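[editor's note] gen_dhchap_key above draws N/2 random bytes with xxd and hands the hex string plus a digest index (0=null, 1=sha256, 2=sha384, 3=sha512) to an inline python formatter whose body is not shown in this log. A sketch of what an equivalent DHHC-1 secret generator could look like, assuming the usual NVMe in-band authentication representation (base64 of the key bytes followed by their CRC-32); treat the exact encoding and the helper name as assumptions, not a quote of the script:

  gen_dhchap_key_sketch() {
      local digest=$1 len=$2                           # digest: 0=null 1=sha256 2=sha384 3=sha512
      local hex
      hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters -> len/2 random bytes
      python3 - "$hex" "$digest" <<'EOF'
  import base64, binascii, struct, sys
  key = bytes.fromhex(sys.argv[1])
  crc = struct.pack("<I", binascii.crc32(key) & 0xffffffff)   # assumption: CRC-32 of the key, little-endian
  print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
  EOF
  }

  # Example: a 32-hex-character null-digest secret, stored the way the test stores its keys.
  umask 077
  gen_dhchap_key_sketch 0 32 > /tmp/spdk.key-null.example
  chmod 0600 /tmp/spdk.key-null.example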
00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:42.572 11:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.CXi 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.uQj ]] 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uQj 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.wss 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.AgT ]] 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.AgT 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.kuU 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.oTY ]] 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oTY 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.sMD 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.OrS ]] 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.OrS 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ctC 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
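The rpc_cmd calls traced above register each generated key file with the SPDK application listening on /var/tmp/spdk.sock (rpc_cmd is the test suite's wrapper around scripts/rpc.py; treating it that way here is an assumption). A condensed sketch of that loop:

# keys[] / ckeys[] are the temp-file paths generated earlier in the trace.
# For every key index, add the host key and, when a controller key exists,
# the corresponding ckey as well.
for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
    [[ -n ${ckeys[i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
done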
00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:42.832 11:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:45.366 Waiting for block devices as requested 00:27:45.366 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:27:45.625 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:45.625 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:45.884 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:45.884 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:45.884 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:45.884 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:46.142 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:46.142 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:46.143 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:46.143 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:46.402 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:46.402 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:46.402 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:46.660 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:46.660 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:46.660 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:47.228 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:47.228 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:47.228 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:47.228 11:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:47.228 11:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:47.228 11:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:47.228 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:47.228 11:43:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:47.228 11:43:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:47.488 No valid GPT data, bailing 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:27:47.488 00:27:47.488 Discovery Log Number of Records 2, Generation counter 2 00:27:47.488 =====Discovery Log Entry 0====== 00:27:47.488 trtype: tcp 00:27:47.488 adrfam: ipv4 00:27:47.488 subtype: current discovery subsystem 00:27:47.488 treq: not specified, sq flow control disable supported 00:27:47.488 portid: 1 00:27:47.488 trsvcid: 4420 00:27:47.488 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:47.488 traddr: 10.0.0.1 00:27:47.488 eflags: none 00:27:47.488 sectype: none 00:27:47.488 =====Discovery Log Entry 1====== 00:27:47.488 trtype: tcp 00:27:47.488 adrfam: ipv4 00:27:47.488 subtype: nvme subsystem 00:27:47.488 treq: not specified, sq flow control disable supported 00:27:47.488 portid: 1 00:27:47.488 trsvcid: 4420 00:27:47.488 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:47.488 traddr: 10.0.0.1 00:27:47.488 eflags: none 00:27:47.488 sectype: none 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 
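The mkdir/echo/ln -s sequence traced above is the usual Linux nvmet configfs setup for the kernel target that the discovery log then reports. The xtrace hides the redirection targets, so the attribute paths below are reconstructed from the standard nvmet configfs layout and should be read as a sketch, not a copy of configure_kernel_target:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1

mkdir -p "$subsys/namespaces/1" "$port"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # back the namespace with the local NVMe disk
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"                  # listen on the initiator-facing address
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                      # expose the subsystem on the port

# nvmet_auth_init then creates the host entry and whitelists it for the subsystem:
mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$subsys/attr_allow_any_host"                   # only explicitly allowed hosts may connect
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"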
]] 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.488 11:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.489 11:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:47.489 11:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.489 11:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.748 nvme0n1 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.748 11:43:22 
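On the initiator side, connect_authenticate (traced above) is essentially two RPC calls plus a check that the controller came up. A trimmed sketch of the first invocation, using the same flags that appear in the trace:

# Restrict the host to the digests/DH groups under test, then attach to the
# kernel target over TCP with the DH-HMAC-CHAP key (and controller key, when
# one exists) for this key index.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0" if authentication succeeded
rpc_cmd bdev_nvme_detach_controller nvme0               # tear down before the next combination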
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: ]] 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.748 
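On the target side, each iteration first programs the kernel host entry through nvmet_auth_set_key; the echo lines in the trace ('hmac(sha256)', the DH group, the key, and optionally the controller key) correspond to the host's dhchap attributes in configfs. The attribute names below are the standard nvmet ones and are inferred, since the redirections are not shown in the xtrace:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest under test
echo ffdhe2048      > "$host/dhchap_dhgroup"   # DH group under test
echo "$key"         > "$host/dhchap_key"       # the DHHC-1:... host key for this key index
echo "$ckey"        > "$host/dhchap_ctrl_key"  # written only when a controller key exists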
11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.748 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.008 nvme0n1 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.008 11:43:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: ]] 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.008 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.268 nvme0n1 00:27:48.268 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.268 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.268 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.268 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.268 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:48.268 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.268 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.268 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.268 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.268 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.268 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.268 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: ]] 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.269 nvme0n1 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.269 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: ]] 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:27:48.528 11:43:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.528 nvme0n1 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.528 11:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.788 11:43:22 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.788 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.788 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:48.788 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.788 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.788 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:48.788 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:48.788 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:27:48.788 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:48.788 11:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.788 nvme0n1 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: ]] 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
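The remainder of the trace repeats the same pair of steps for every combination; the nested loops visible at host/auth.sh 100-104 (the points where the digest, DH group, and key index advance) amount to the sweep below:

# Every digest is crossed with every DH group and every key index; each
# combination is programmed into the kernel target, connected to, and torn down.
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done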
"ckey${keyid}"}) 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.788 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.046 nvme0n1 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: ]] 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.046 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.305 nvme0n1 00:27:49.305 
11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: ]] 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:27:49.305 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:49.306 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.564 nvme0n1 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.564 11:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.564 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.564 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.565 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.565 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.565 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.565 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.565 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:49.565 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: ]] 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.823 nvme0n1 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.823 
11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.823 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.083 11:43:24 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.083 nvme0n1 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.083 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: ]] 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:50.342 11:43:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.342 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.600 nvme0n1 00:27:50.600 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.600 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.600 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.600 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.600 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.600 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.600 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.600 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.600 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.600 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.600 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.600 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:50.600 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:50.600 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.600 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.600 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:50.600 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:50.600 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:27:50.600 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:27:50.600 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.600 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:50.600 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:27:50.601 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: ]] 00:27:50.601 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:27:50.601 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:50.601 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.601 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.601 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:50.601 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:50.601 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.601 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:50.601 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.601 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.601 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.601 11:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.601 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.601 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.601 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.601 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.601 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.601 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.601 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.601 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.601 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.601 11:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.601 11:43:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:50.601 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.601 11:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.859 nvme0n1 00:27:50.859 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.859 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.859 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.859 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.859 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.859 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.859 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.859 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.859 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.859 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.859 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.859 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.859 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:50.859 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.859 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.859 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:50.859 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:50.859 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:27:50.859 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:27:50.859 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.859 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:50.859 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:27:50.859 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: ]] 00:27:50.859 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:27:50.859 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:50.859 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.860 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.860 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:50.860 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:50.860 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.860 11:43:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:50.860 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.860 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.860 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.860 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.860 11:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.860 11:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.860 11:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.860 11:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.860 11:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.860 11:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.860 11:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.860 11:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.860 11:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.860 11:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.860 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:50.860 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.860 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.485 nvme0n1 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
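Every attach in this trace is preceded by the same get_main_ns_ip expansion, which resolves the address the host should dial for the transport in use (10.0.0.1 on this run). A rough reconstruction from the xtrace lines above follows; the name of the transport variable and any fallback path are not visible in the log, so they are assumptions here, not a copy of nvmf/common.sh:

    # Hypothetical reconstruction of the IP-selection helper traced above.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
            ["tcp"]=NVMF_INITIATOR_IP       # TCP runs (this job) dial the initiator IP
        )

        # $TEST_TRANSPORT is an assumed name; the trace only shows its value, "tcp".
        [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}

        # Indirect expansion: NVMF_INITIATOR_IP resolves to 10.0.0.1 in this run.
        [[ -n ${!ip} ]] || return 1
        echo "${!ip}"
    }

The value it prints is what feeds the -a argument of every bdev_nvme_attach_controller call in this section.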
00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: ]] 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.485 11:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.486 11:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.486 11:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.486 11:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.486 11:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.486 11:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.486 11:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.486 11:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.486 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:51.486 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.486 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.486 nvme0n1 00:27:51.486 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.486 11:43:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.486 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.486 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.486 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.486 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.745 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.745 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.745 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.745 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.745 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.745 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.745 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:51.745 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.745 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.745 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:51.745 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:51.745 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:27:51.745 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:51.745 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.745 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:51.745 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:27:51.745 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:51.745 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:51.745 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.745 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.745 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:51.745 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:51.745 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.745 11:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:51.745 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.745 11:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.745 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.745 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.745 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.745 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.745 11:43:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.745 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.745 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.745 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.745 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.745 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.745 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.745 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.745 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:51.745 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.745 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.004 nvme0n1 00:27:52.004 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.004 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.004 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.004 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:27:52.005 11:43:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: ]] 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.005 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.576 nvme0n1 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.576 
11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: ]] 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.576 11:43:26 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.576 11:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.144 nvme0n1 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: ]] 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.144 11:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.145 11:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.145 11:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.145 11:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.145 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:53.145 11:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.145 11:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.713 nvme0n1 00:27:53.713 11:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.713 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.713 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.713 11:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.713 11:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.713 11:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.713 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.713 11:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.713 11:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.713 11:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.713 
11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: ]] 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.713 11:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.308 nvme0n1 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.308 11:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.876 nvme0n1 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: ]] 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:54.876 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:54.877 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.877 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:54.877 11:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.877 11:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.877 11:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.877 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.877 11:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.877 11:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.877 11:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.877 11:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.877 11:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.877 11:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.877 11:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.877 11:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.877 11:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.877 11:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.877 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:54.877 11:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.877 11:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.811 nvme0n1 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: ]] 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 
-- # local ip 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.811 11:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.812 11:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.812 11:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.812 11:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.812 11:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.812 11:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.812 11:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.812 11:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.812 11:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:55.812 11:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.812 11:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.378 nvme0n1 00:27:56.378 11:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.378 11:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.378 11:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.378 11:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.378 11:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.378 11:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.378 11:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.378 11:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.378 11:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.378 11:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.378 11:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: ]] 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.637 11:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.205 nvme0n1 00:27:57.205 11:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.205 11:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.205 11:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.205 11:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.205 11:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.205 11:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.205 11:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.205 
11:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.205 11:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.205 11:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.205 11:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.205 11:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.205 11:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:57.205 11:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.205 11:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:57.205 11:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:57.205 11:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:57.205 11:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:27:57.205 11:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:27:57.205 11:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:57.205 11:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:57.205 11:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:27:57.206 11:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: ]] 00:27:57.206 11:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:27:57.206 11:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:57.206 11:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.206 11:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:57.206 11:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:57.206 11:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:57.206 11:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.206 11:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:57.206 11:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.206 11:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.465 11:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.465 11:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.465 11:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.465 11:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.465 11:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.465 11:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.465 11:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.465 11:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:27:57.465 11:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.465 11:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.465 11:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.465 11:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.465 11:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:57.465 11:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.465 11:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.033 nvme0n1 00:27:58.033 11:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.033 11:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.034 11:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.034 11:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.034 11:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.034 11:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.034 11:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.034 11:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.034 11:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:58.293 
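For reference, each connect_authenticate iteration repeated throughout this trace reduces to the RPC sequence below (a minimal sketch assembled from the commands visible above; rpc_cmd is the RPC wrapper used throughout the trace, and the bdev name, address, port and NQNs are the ones logged here, while the digest, DH group and key index vary per iteration):

# One connect_authenticate pass, using the sha256/ffdhe8192/key0 values from this run.
digest=sha256 dhgroup=ffdhe8192 keyid=0

# Limit the host to the digest/DH-group pair under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with the DH-HMAC-CHAP host key, plus the controller key when one is configured.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Confirm the controller authenticated and came up, then detach before the next combination.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0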
11:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.293 11:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.294 11:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.294 11:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:58.294 11:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.294 11:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.230 nvme0n1 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: ]] 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.230 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.231 nvme0n1 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: ]] 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
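The get_main_ns_ip helper that runs before every attach simply picks which address the host dials for the transport under test; as the nvmf/common.sh lines above show, it behaves roughly like this (the TEST_TRANSPORT variable name is an assumption, the rest mirrors the trace):

# rdma runs dial the first target IP, tcp runs dial the initiator-side IP.
declare -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp here, so ip=NVMF_INITIATOR_IP
[[ -n ${!ip} ]]                        # indirect expansion; non-empty check seen as [[ -z 10.0.0.1 ]]
echo "${!ip}"                          # prints 10.0.0.1 in this run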
00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.231 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.489 nvme0n1 00:27:59.489 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.489 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.489 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.489 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.489 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.489 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.489 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.489 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.489 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.489 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.489 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.489 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.489 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:59.489 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: ]] 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.490 11:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.749 nvme0n1 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: ]] 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.749 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.009 nvme0n1 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.009 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.267 nvme0n1 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: ]] 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.267 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.526 nvme0n1 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: ]] 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
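Zooming out, everything in this portion of the log is the host/auth.sh test matrix: each digest is exercised against every DH group and every configured key index, and each combination goes through the set-key / connect / verify / detach cycle shown earlier. In outline (a sketch following the loop markers in the trace; the digests, dhgroups and keys arrays hold the values printed above):

for digest in "${digests[@]}"; do          # sha256, sha384, ... in this run
    for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048 through ffdhe8192
        for keyid in "${!keys[@]}"; do     # key indices 0..4
            # Program the target side with this key, then authenticate from the host
            # and verify the controller shows up as nvme0 before moving on.
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done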
00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.526 11:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.785 nvme0n1 00:28:00.785 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.785 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.785 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.785 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.785 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.785 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.785 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.785 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.785 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.785 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.785 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.785 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: ]] 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.786 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.045 nvme0n1 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: ]] 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.045 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.305 nvme0n1 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.305 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.564 nvme0n1 00:28:01.564 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.564 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.564 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.564 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.564 11:43:35 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.564 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.564 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.564 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.564 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.564 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.564 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.564 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:01.564 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.564 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:01.564 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: ]] 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.565 11:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.824 nvme0n1 00:28:01.824 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.824 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.824 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.824 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.824 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.824 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.824 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.824 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.824 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.824 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: ]] 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.083 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.342 nvme0n1 00:28:02.342 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.342 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.342 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.342 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.342 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.342 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.342 11:43:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.342 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.342 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.342 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.342 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.342 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.342 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:02.342 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.342 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.342 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:02.342 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:02.342 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:28:02.342 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:28:02.342 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.342 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:02.342 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:28:02.342 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: ]] 00:28:02.343 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:28:02.343 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:02.343 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.343 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.343 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:02.343 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:02.343 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.343 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:02.343 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.343 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.343 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.343 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.343 11:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.343 11:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.343 11:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.343 11:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.343 11:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.343 11:43:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.343 11:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.343 11:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.343 11:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.343 11:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.343 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:02.343 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.343 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.602 nvme0n1 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: ]] 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:02.602 11:43:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.602 11:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.602 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.602 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.602 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.602 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.602 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.602 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.602 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.602 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.602 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.602 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.602 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.602 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.602 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:02.602 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.602 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.861 nvme0n1 00:28:02.861 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.861 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.861 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.861 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.861 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.861 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.120 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:03.121 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.380 nvme0n1 00:28:03.380 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.380 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.380 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.380 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.380 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.380 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.380 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.380 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.380 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.380 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.380 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.380 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:03.380 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.380 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: ]] 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.381 11:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.950 nvme0n1 00:28:03.950 11:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.950 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.950 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: ]] 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.951 11:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.520 nvme0n1 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.520 11:43:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: ]] 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.520 11:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.089 nvme0n1 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: ]] 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.089 11:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.660 nvme0n1 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.660 11:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:05.661 11:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.661 11:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.285 nvme0n1 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: ]] 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
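
The repeated nvmf/common.sh@741-755 lines above are the expansion of the get_main_ns_ip helper. A hedged sketch of what that trace implies is shown below: the helper maps the transport under test to the name of an environment variable and prints that variable's value (10.0.0.1 here for tcp). Only the expansions are visible in the log, so the exact guard bodies (return 1) are assumptions.

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    # transport name -> name of the env var holding the address to dial
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1                    # traced as [[ -z tcp ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # traced as [[ -z NVMF_INITIATOR_IP ]]
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1                             # traced as [[ -z 10.0.0.1 ]]
    echo "${!ip}"                                           # traced as echo 10.0.0.1
}
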
00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.285 11:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.853 nvme0n1 00:28:06.853 11:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.853 11:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.853 11:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.853 11:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.853 11:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.853 11:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.853 11:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.853 11:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.853 11:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.853 11:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.853 11:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.853 11:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.853 11:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:28:06.853 11:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.853 11:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:06.853 11:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:06.853 11:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:06.853 11:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:28:06.853 11:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:28:06.853 11:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:06.853 11:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:06.853 11:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:28:06.853 11:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: ]] 00:28:06.853 11:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:28:07.112 11:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:07.112 11:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.112 11:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:07.112 11:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:07.112 11:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:07.112 11:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.112 11:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:07.112 11:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.112 11:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.112 11:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.112 11:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.112 11:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.112 11:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.112 11:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.112 11:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.112 11:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.112 11:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.112 11:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.112 11:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.112 11:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.112 11:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.112 11:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:07.112 11:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.112 11:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.679 nvme0n1 00:28:07.679 11:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.679 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.679 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.679 11:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.679 11:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.679 11:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: ]] 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.937 11:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.938 11:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.938 11:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.938 11:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.938 11:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.938 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:07.938 11:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.938 11:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.872 nvme0n1 00:28:08.872 11:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.872 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.872 11:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.872 11:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.872 11:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.872 11:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: ]] 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.872 11:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.436 nvme0n1 00:28:09.436 11:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.436 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:28:09.436 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.436 11:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.436 11:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.436 11:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.436 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.436 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.436 11:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.436 11:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.694 11:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.694 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.694 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:09.694 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.694 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:09.694 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:09.694 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.695 11:43:43 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.695 11:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.260 nvme0n1 00:28:10.260 11:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.260 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.260 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.260 11:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.260 11:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: ]] 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.519 nvme0n1 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.519 11:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.777 11:43:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.777 11:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.777 11:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.777 11:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: ]] 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.777 nvme0n1 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:10.777 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: ]] 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.036 nvme0n1 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.036 11:43:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: ]] 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.036 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.037 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.037 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.037 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.037 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.037 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.037 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.037 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.037 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.037 11:43:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:11.037 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.037 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.296 nvme0n1 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.296 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.297 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.297 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.297 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.297 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.297 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.297 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.297 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:11.297 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.297 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.557 nvme0n1 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: ]] 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.557 11:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:11.558 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.558 11:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.817 nvme0n1 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.817 
11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: ]] 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.817 11:43:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.817 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.076 nvme0n1 00:28:12.076 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.076 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.076 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.076 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.076 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.076 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.076 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.076 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.076 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.076 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.076 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.076 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.076 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
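The echo lines traced around host/auth.sh@42-51 come from nvmet_auth_set_key, which provisions the DH-HMAC-CHAP secret for the current keyid on the kernel nvmet target before the SPDK host tries to authenticate against it. A minimal sketch of what those writes presumably target, assuming the standard kernel nvmet configfs layout (the directory and attribute names below are assumptions; the log only shows the digest, dhgroup, key and ctrl-key values being echoed):
    # assumed configfs entry for the allowed host on the kernel nvmet target
    HOST_DIR=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)'             > "$HOST_DIR/dhchap_hash"      # digest under test
    echo ffdhe3072                  > "$HOST_DIR/dhchap_dhgroup"   # DH group under test
    echo 'DHHC-1:01:<host secret>:' > "$HOST_DIR/dhchap_key"       # key for this keyid
    echo 'DHHC-1:01:<ctrl secret>:' > "$HOST_DIR/dhchap_ctrl_key"  # only when a ckey is defined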
00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: ]] 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.077 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.336 nvme0n1 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.336 11:43:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: ]] 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
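Each connect_authenticate call then exercises the SPDK initiator side through the two RPCs visible verbatim in the trace: bdev_nvme_set_options restricts the digests and DH groups the host may negotiate, and bdev_nvme_attach_controller connects with the key pair for the current keyid. Outside the autotest wrappers the same pair looks roughly like this (rpc_cmd is the autotest wrapper around scripts/rpc.py; key3/ckey3 are key names the test registered earlier in the run, outside this excerpt):
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
A successful attach is then verified with bdev_nvme_get_controllers and torn down with bdev_nvme_detach_controller before the next keyid, which is the get_controllers / detach_controller pattern repeating through the rest of this excerpt.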
00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.336 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.595 nvme0n1 00:28:12.595 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.595 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.595 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.596 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.596 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.596 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.596 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.596 11:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.596 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.596 11:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.596 
11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.596 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.855 nvme0n1 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: ]] 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.855 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.114 nvme0n1 00:28:13.114 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.114 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.114 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.114 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.114 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.114 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.372 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: ]] 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.373 11:43:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.373 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.632 nvme0n1 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
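The secrets themselves use the NVMe in-band authentication representation DHHC-1:NN:<base64 secret and CRC>:, where NN records the hash applied when the secret was generated (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), which is why keys with different prefixes and lengths appear throughout this run. If nvme-cli is available, a secret in this format can be produced with something like the following (flag spelling per current nvme-cli, not taken from this log; adjust to the installed version):
    nvme gen-dhchap-key --hmac=1 --key-length=32 --nqn nqn.2024-02.io.spdk:host0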
00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: ]] 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.632 11:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.891 nvme0n1 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: ]] 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.891 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.459 nvme0n1 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.459 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.719 nvme0n1 00:28:14.719 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.719 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.719 11:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.719 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.719 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.719 11:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: ]] 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
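The host/auth.sh@101-104 markers in the trace show the loop that produces all of these repeated blocks: for every DH group the test provisions each keyid on the target and then authenticates against it with the same digest. Reconstructed shape of that loop, with variable and helper names as they appear in the trace (the full dhgroup list is assumed; this excerpt only reaches ffdhe3072 through ffdhe6144):
    for dhgroup in "${dhgroups[@]}"; do               # assumed full list; ffdhe3072/4096/6144 shown here
        for keyid in "${!keys[@]}"; do                # keyids 0-4 in this run
            nvmet_auth_set_key   sha512 "$dhgroup" "$keyid"   # target side: digest, dhgroup, key, ckey
            connect_authenticate sha512 "$dhgroup" "$keyid"   # host side: set_options, attach, verify, detach
        done
    done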
00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.719 11:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.287 nvme0n1 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: ]] 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
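Before every attach the trace repeats the same nvmf/common.sh@741-755 sequence; that is get_main_ns_ip choosing which address the initiator should dial: rdma runs take NVMF_FIRST_TARGET_IP while tcp runs take NVMF_INITIATOR_IP, which resolves to 10.0.0.1 in this job. A sketch of that helper as it can be read back from the trace (the transport variable name, guard conditions and return values are inferred, not shown verbatim):
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1                     # "tcp" in this job (name assumed)
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                     # name of the env var to read
        [[ -z ${!ip} ]] && return 1                              # indirection; 10.0.0.1 here
        echo "${!ip}"
    }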
00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.287 11:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.288 11:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.288 11:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.288 11:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.288 11:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.288 11:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:15.288 11:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.288 11:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.856 nvme0n1 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: ]] 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.856 11:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.423 nvme0n1 00:28:16.423 11:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.423 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.423 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.423 11:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.423 11:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: ]] 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.424 11:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.992 nvme0n1 00:28:16.992 11:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.992 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.992 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.992 11:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.992 11:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.992 11:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.992 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.992 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.992 11:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.992 11:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.993 11:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.560 nvme0n1 00:28:17.560 11:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.560 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.560 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.560 11:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.560 11:43:51 
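Each pass of the loop traced above is the same two-step exchange: bdev_nvme_set_options pins the host to a single digest/dhgroup pair, then bdev_nvme_attach_controller dials the kernel target with the matching DH-HMAC-CHAP key and expects the connect to succeed. Issued by hand, the host side of one such iteration would look roughly like the sketch below; rpc_cmd in the trace is the test wrapper and is assumed here to forward its arguments to scripts/rpc.py, and key2/ckey2 refer to key material registered earlier in the run (not shown in this excerpt).

# Host side of one sha512/ffdhe6144 iteration, reconstructed from the trace above (a sketch).
./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
./scripts/rpc.py bdev_nvme_get_controllers        # the trace expects exactly one controller, nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0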
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.560 11:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.560 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.560 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.560 11:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxNzEzOGUwMDNlMzM2ZGYyNWQ2MjcxYzYyMDg3NWbHVC/y: 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: ]] 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTVkOWE3MjQ2NTk5ZjFhYjAwOTkzMWUyZjNmNDMwMDlmYjBiZDBiNWNiZjkxYWRkOTJjNTcwNGQ4ZjZlNjAxMG/7/KI=: 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.561 11:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.128 nvme0n1 00:28:18.128 11:43:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.128 11:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.128 11:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.128 11:43:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.128 11:43:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.128 11:43:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.386 11:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.386 11:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.386 11:43:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.386 11:43:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.386 11:43:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.386 11:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.386 11:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:18.386 11:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: ]] 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.387 11:43:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.320 nvme0n1 00:28:19.320 11:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.320 11:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.320 11:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.320 11:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.320 11:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.320 11:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.320 11:43:53 
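On the target side, the bare echo lines in the trace ('hmac(sha512)', ffdhe8192, and the two DHHC-1 strings) are xtrace output, so their redirections are not visible. Assuming they land in the usual Linux nvmet configfs host attributes (the attribute names below are an assumption, not something the log shows), nvmet_auth_set_key for keyid 1 amounts to roughly:

# Target-side sketch; attribute names dhchap_hash/dhchap_dhgroup/dhchap_key/dhchap_ctrl_key
# are assumed, and the DHHC-1 strings from the trace are abbreviated here.
h=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha512)' > "$h/dhchap_hash"
echo 'ffdhe8192'    > "$h/dhchap_dhgroup"
echo 'DHHC-1:00:MWMzNWRi...hXEGtw==:' > "$h/dhchap_key"       # host key seen in the trace, truncated
echo 'DHHC-1:02:NDRjMDEy...ZCOT8g==:' > "$h/dhchap_ctrl_key"  # controller key seen in the trace, truncated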
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.320 11:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.320 11:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.320 11:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.320 11:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.320 11:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjEyYjVmMGM1NWE2MTVhMmM3NDI4MDNlNmY5NDQ0Zjcu13It: 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: ]] 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTFjZTIxNjdmMWEzYzYwZTUxYmFjZDliZjg3NjJiMjnGt+Qw: 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.321 11:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.888 nvme0n1 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0NjYyYjM0NmIxYzAzZjI0YjVkM2Q2Zjk0NDVjMWIyYzNjMzEzMDc2MmNjOGI1uEBxwA==: 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: ]] 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThhOTZkMzFjMGVmMTEwNjVlZTNmOWU4MjJjY2ViY2PmsCox: 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:19.888 11:43:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:19.888 11:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.889 11:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.889 11:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.889 11:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.889 11:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.889 11:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.889 11:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.889 11:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.889 11:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.889 11:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.889 11:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.889 11:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.889 11:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.889 11:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.147 11:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:20.147 11:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.147 11:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.715 nvme0n1 00:28:20.715 11:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.715 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.715 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.715 11:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.715 11:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.715 11:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.715 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.715 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUzMzZkMTJhM2M2YWYwOGEwMzJjM2VjZTNkZDExNjk2YmJmZjg2NzhiZDUzMDBkZWI0NGZjMWM0ZmVmYTQwZkZnSkY=: 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:20.716 11:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.651 nvme0n1 00:28:21.651 11:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.651 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.651 11:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.651 11:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.651 11:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.651 11:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMzNWRiMzczNjlmODhkZjE5NmNmMmUzNmEzYmQ4ZDBkMGM0NTljODY1YmRiYjRihXEGtw==: 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: ]] 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRjMDEyYWJiZWQ5NjU5NjgyMmFhYzE5MGYzZTZlNDZkN2EzOTExNGEwNTViMTdjZCOT8g==: 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.651 
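The get_main_ns_ip block that keeps expanding in the trace (the ip_candidates map, the -z checks, then echo 10.0.0.1) simply resolves which address the host should dial for the configured transport. Reconstructed from the trace alone, it behaves roughly like the sketch below; the name of the variable holding the transport ("tcp" here) is a guess.

# Rough reconstruction of the helper whose expansion repeats in the trace (a sketch).
get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}    # the *name* of the variable to read
        ip=${!ip}                               # dereference it: 10.0.0.1 in this run
        [[ -n $ip ]] || return 1
        echo "$ip"
}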
11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.651 request: 00:28:21.651 { 00:28:21.651 "name": "nvme0", 00:28:21.651 "trtype": "tcp", 00:28:21.651 "traddr": "10.0.0.1", 00:28:21.651 "adrfam": "ipv4", 00:28:21.651 "trsvcid": "4420", 00:28:21.651 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:21.651 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:21.651 "prchk_reftag": false, 00:28:21.651 "prchk_guard": false, 00:28:21.651 "hdgst": false, 00:28:21.651 "ddgst": false, 00:28:21.651 "method": "bdev_nvme_attach_controller", 00:28:21.651 "req_id": 1 00:28:21.651 } 00:28:21.651 Got JSON-RPC error response 00:28:21.651 response: 00:28:21.651 { 00:28:21.651 "code": -5, 00:28:21.651 "message": "Input/output error" 00:28:21.651 } 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.651 11:43:56 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.909 request: 00:28:21.909 { 00:28:21.909 "name": "nvme0", 00:28:21.909 "trtype": "tcp", 00:28:21.909 "traddr": "10.0.0.1", 00:28:21.909 "adrfam": "ipv4", 00:28:21.909 "trsvcid": "4420", 00:28:21.909 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:21.909 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:21.909 "prchk_reftag": false, 00:28:21.909 "prchk_guard": false, 00:28:21.909 "hdgst": false, 00:28:21.909 "ddgst": false, 00:28:21.909 "dhchap_key": "key2", 00:28:21.909 "method": "bdev_nvme_attach_controller", 00:28:21.909 "req_id": 1 00:28:21.909 } 00:28:21.909 Got JSON-RPC error response 00:28:21.909 response: 00:28:21.909 { 00:28:21.909 "code": -5, 00:28:21.909 "message": "Input/output error" 00:28:21.909 } 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:21.909 11:43:56 
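The NOT-wrapped attach attempts here are the negative half of the test: once the target insists on DH-CHAP, connecting with no key, with key2 alone, or with a mismatched key1/ckey2 pair has to fail, and the request/response dump with code -5 / "Input/output error" appears to be rpc.py reporting that rejection. Checked by hand, the first case would look roughly like this sketch:

# Expect failure: no DH-CHAP key supplied although the target now requires one (a sketch).
if ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "unexpected: unauthenticated connect succeeded" >&2
        exit 1
fi
./scripts/rpc.py bdev_nvme_get_controllers    # the trace then checks the list is empty (jq length == 0)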
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.909 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:21.910 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:21.910 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:21.910 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:21.910 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:21.910 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:21.910 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:21.910 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:21.910 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.910 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.910 request: 00:28:21.910 { 00:28:21.910 "name": "nvme0", 00:28:21.910 "trtype": "tcp", 00:28:21.910 "traddr": "10.0.0.1", 00:28:21.910 "adrfam": "ipv4", 
00:28:21.910 "trsvcid": "4420", 00:28:21.910 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:21.910 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:21.910 "prchk_reftag": false, 00:28:21.910 "prchk_guard": false, 00:28:21.910 "hdgst": false, 00:28:21.910 "ddgst": false, 00:28:21.910 "dhchap_key": "key1", 00:28:21.910 "dhchap_ctrlr_key": "ckey2", 00:28:21.910 "method": "bdev_nvme_attach_controller", 00:28:21.910 "req_id": 1 00:28:21.910 } 00:28:21.910 Got JSON-RPC error response 00:28:21.910 response: 00:28:21.910 { 00:28:21.910 "code": -5, 00:28:21.910 "message": "Input/output error" 00:28:21.910 } 00:28:21.910 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:21.910 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:21.910 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:21.910 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:21.910 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:21.910 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:21.910 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:28:21.910 11:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:21.910 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:21.910 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:28:21.910 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:21.910 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:28:21.910 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:21.910 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:21.910 rmmod nvme_tcp 00:28:22.168 rmmod nvme_fabrics 00:28:22.168 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:22.168 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:28:22.168 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:28:22.168 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2939294 ']' 00:28:22.168 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2939294 00:28:22.168 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2939294 ']' 00:28:22.168 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2939294 00:28:22.168 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:28:22.168 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:22.168 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2939294 00:28:22.168 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:22.168 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:22.168 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2939294' 00:28:22.168 killing process with pid 2939294 00:28:22.168 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2939294 00:28:22.168 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2939294 00:28:22.425 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:28:22.425 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:22.425 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:22.425 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:22.425 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:22.425 11:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.425 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:22.425 11:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.327 11:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:24.327 11:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:24.327 11:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:24.327 11:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:24.327 11:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:24.327 11:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:24.327 11:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:24.327 11:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:24.327 11:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:24.327 11:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:24.327 11:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:24.327 11:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:24.327 11:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:27.607 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:27.607 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:27.607 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:27.607 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:27.607 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:27.607 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:27.607 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:27.607 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:27.607 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:27.607 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:27.607 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:27.607 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:27.607 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:27.607 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:27.607 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:27.607 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:28.173 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:28:28.431 11:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.CXi /tmp/spdk.key-null.wss /tmp/spdk.key-sha256.kuU /tmp/spdk.key-sha384.sMD /tmp/spdk.key-sha512.ctC 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:28.431 11:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:30.960 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:28:30.960 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:30.960 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:28:30.960 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:28:30.960 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:28:30.960 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:28:30.960 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:28:30.960 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:28:30.960 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:28:30.960 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:28:30.960 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:28:30.960 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:28:30.960 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:28:30.960 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:28:30.960 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:28:30.961 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:28:30.961 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:28:30.961 00:28:30.961 real 0m55.368s 00:28:30.961 user 0m50.136s 00:28:30.961 sys 0m12.394s 00:28:30.961 11:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:30.961 11:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.961 ************************************ 00:28:30.961 END TEST nvmf_auth_host 00:28:30.961 ************************************ 00:28:31.220 11:44:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:31.220 11:44:05 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:28:31.220 11:44:05 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:31.220 11:44:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:31.220 11:44:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:31.220 11:44:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:31.220 ************************************ 00:28:31.220 START TEST nvmf_digest 00:28:31.220 ************************************ 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:31.220 * Looking for test storage... 
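Before the digest test starts, the auth test tears the kernel target back down; the commands are scattered through the trace above but run in the order collected below (the destination of the bare 'echo 0' is not visible in the xtrace output, so it is only noted as a comment):

# Kernel nvmet teardown, in the order the trace runs it (collected here as a sketch).
cfg=/sys/kernel/config/nvmet
rm    "$cfg/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0"
rmdir "$cfg/hosts/nqn.2024-02.io.spdk:host0"
# (an 'echo 0' runs here; its redirection target is not shown in the trace)
rm -f "$cfg/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"
rmdir "$cfg/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1"
rmdir "$cfg/ports/1"
rmdir "$cfg/subsystems/nqn.2024-02.io.spdk:cnode0"
modprobe -r nvmet_tcp nvmet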
00:28:31.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:31.220 11:44:05 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:31.220 11:44:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:28:31.221 11:44:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:37.789 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:37.789 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:37.790 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:37.790 Found net devices under 0000:af:00.0: cvl_0_0 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:37.790 Found net devices under 0000:af:00.1: cvl_0_1 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:37.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:37.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:28:37.790 00:28:37.790 --- 10.0.0.2 ping statistics --- 00:28:37.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.790 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:37.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:37.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:28:37.790 00:28:37.790 --- 10.0.0.1 ping statistics --- 00:28:37.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.790 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:37.790 ************************************ 00:28:37.790 START TEST nvmf_digest_clean 00:28:37.790 ************************************ 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2953981 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2953981 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2953981 ']' 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:37.790 
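The block above is nvmf_tcp_init preparing the two E810 ports for the digest tests: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and a ping in each direction confirms the path before nvme-tcp is loaded. A minimal sketch of the same steps, using only commands and names that appear in this log (run as root; assumes the two ports can reach each other, as they evidently do here):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                               # target lives in its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                         # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator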
11:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:37.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:37.790 11:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:37.790 [2024-07-15 11:44:11.631371] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:28:37.790 [2024-07-15 11:44:11.631426] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:37.790 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.790 [2024-07-15 11:44:11.717342] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.790 [2024-07-15 11:44:11.806215] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:37.790 [2024-07-15 11:44:11.806262] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:37.790 [2024-07-15 11:44:11.806273] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:37.790 [2024-07-15 11:44:11.806282] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:37.790 [2024-07-15 11:44:11.806290] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:37.790 [2024-07-15 11:44:11.806311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:38.373 null0 00:28:38.373 [2024-07-15 11:44:12.695196] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:38.373 [2024-07-15 11:44:12.719387] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2954257 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2954257 /var/tmp/bperf.sock 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2954257 ']' 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:28:38.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:38.373 11:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:38.373 [2024-07-15 11:44:12.774897] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:28:38.373 [2024-07-15 11:44:12.774952] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2954257 ] 00:28:38.373 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.631 [2024-07-15 11:44:12.855723] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.631 [2024-07-15 11:44:12.959265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.567 11:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:39.567 11:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:39.567 11:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:39.567 11:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:39.567 11:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:39.825 11:44:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:39.825 11:44:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:40.084 nvme0n1 00:28:40.084 11:44:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:40.084 11:44:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:40.084 Running I/O for 2 seconds... 
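At this point the first clean-digest pass is running: bdevperf was started on /var/tmp/bperf.sock with --wait-for-rpc, its framework was initialized over RPC, an NVMe-oF controller was attached with --ddgst (TCP data digest enabled, which is what produces the crc32c work being measured), and perform_tests was kicked off. The same sequence, condensed from the commands echoed above, with $SPDK standing in for the long workspace path:

    # $SPDK = /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk (shorthand for this sketch only)
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests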
00:28:42.616 00:28:42.617 Latency(us) 00:28:42.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.617 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:42.617 nvme0n1 : 2.01 14987.83 58.55 0.00 0.00 8529.14 5242.88 18230.92 00:28:42.617 =================================================================================================================== 00:28:42.617 Total : 14987.83 58.55 0.00 0.00 8529.14 5242.88 18230.92 00:28:42.617 0 00:28:42.617 11:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:42.617 11:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:42.617 11:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:42.617 11:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:42.617 | select(.opcode=="crc32c") 00:28:42.617 | "\(.module_name) \(.executed)"' 00:28:42.617 11:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:42.617 11:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:42.617 11:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:42.617 11:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:42.617 11:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:42.617 11:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2954257 00:28:42.617 11:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2954257 ']' 00:28:42.617 11:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2954257 00:28:42.617 11:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:42.617 11:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:42.617 11:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2954257 00:28:42.617 11:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:42.617 11:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:42.617 11:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2954257' 00:28:42.617 killing process with pid 2954257 00:28:42.617 11:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2954257 00:28:42.617 Received shutdown signal, test time was about 2.000000 seconds 00:28:42.617 00:28:42.617 Latency(us) 00:28:42.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.617 =================================================================================================================== 00:28:42.617 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:42.617 11:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2954257 00:28:42.617 11:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:42.617 11:44:17 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:42.617 11:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:42.617 11:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:42.617 11:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:42.617 11:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:42.617 11:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:42.617 11:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2955026 00:28:42.617 11:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2955026 /var/tmp/bperf.sock 00:28:42.617 11:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:42.617 11:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2955026 ']' 00:28:42.617 11:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:42.617 11:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:42.617 11:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:42.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:42.617 11:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:42.617 11:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:42.891 [2024-07-15 11:44:17.095639] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:28:42.891 [2024-07-15 11:44:17.095700] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2955026 ] 00:28:42.891 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:42.891 Zero copy mechanism will not be used. 
00:28:42.891 EAL: No free 2048 kB hugepages reported on node 1 00:28:42.891 [2024-07-15 11:44:17.176201] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.891 [2024-07-15 11:44:17.280553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.889 11:44:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:43.889 11:44:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:43.889 11:44:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:43.889 11:44:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:43.889 11:44:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:44.145 11:44:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:44.145 11:44:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:44.402 nvme0n1 00:28:44.402 11:44:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:44.402 11:44:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:44.660 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:44.660 Zero copy mechanism will not be used. 00:28:44.660 Running I/O for 2 seconds... 
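Each pass ends with the same digest verification already seen after the first table: the test pulls accel statistics from the bperf socket and requires that crc32c operations were executed by the expected module ("software" throughout this run, since every pass is invoked with scan_dsa=false). A rough bash equivalent of that check, using the exact jq filter echoed in the log and the same $SPDK shorthand as above:

    read -r acc_module acc_executed < <(
        $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 ))             # digests must actually have been computed
    [[ $acc_module == software ]]      # and by the software module, not a DSA offload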
00:28:46.561 00:28:46.561 Latency(us) 00:28:46.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.561 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:46.561 nvme0n1 : 2.00 3872.54 484.07 0.00 0.00 4126.36 837.82 10128.29 00:28:46.561 =================================================================================================================== 00:28:46.561 Total : 3872.54 484.07 0.00 0.00 4126.36 837.82 10128.29 00:28:46.561 0 00:28:46.562 11:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:46.562 11:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:46.562 11:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:46.562 11:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:46.562 11:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:46.562 | select(.opcode=="crc32c") 00:28:46.562 | "\(.module_name) \(.executed)"' 00:28:46.821 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:46.821 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:46.821 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:46.821 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:46.821 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2955026 00:28:46.821 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2955026 ']' 00:28:46.821 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2955026 00:28:46.821 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:46.821 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:46.821 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2955026 00:28:46.821 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:46.821 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:46.821 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2955026' 00:28:46.821 killing process with pid 2955026 00:28:46.821 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2955026 00:28:46.821 Received shutdown signal, test time was about 2.000000 seconds 00:28:46.821 00:28:46.821 Latency(us) 00:28:46.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.821 =================================================================================================================== 00:28:46.821 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:46.821 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2955026 00:28:47.080 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:47.080 11:44:21 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:47.080 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:47.080 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:47.080 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:47.080 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:47.080 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:47.080 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2955768 00:28:47.080 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2955768 /var/tmp/bperf.sock 00:28:47.080 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:47.080 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2955768 ']' 00:28:47.081 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:47.081 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:47.081 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:47.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:47.081 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:47.081 11:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:47.340 [2024-07-15 11:44:21.547828] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:28:47.340 [2024-07-15 11:44:21.547888] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2955768 ] 00:28:47.340 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.340 [2024-07-15 11:44:21.631236] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.340 [2024-07-15 11:44:21.738263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.277 11:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:48.277 11:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:48.277 11:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:48.277 11:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:48.277 11:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:48.536 11:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:48.536 11:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:48.795 nvme0n1 00:28:48.795 11:44:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:48.795 11:44:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:49.054 Running I/O for 2 seconds... 
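For reference, nvmf_digest_clean is now on the third of its four run_bperf passes; they differ only in workload, I/O size and queue depth, and all pass false to skip DSA scanning. Laid out side by side, with the arguments exactly as echoed in this log:

    run_bperf randread  4096   128 false    # 4 KiB random reads,   qd 128
    run_bperf randread  131072 16  false    # 128 KiB random reads,  qd 16
    run_bperf randwrite 4096   128 false    # 4 KiB random writes,  qd 128
    run_bperf randwrite 131072 16  false    # 128 KiB random writes, qd 16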
00:28:50.958 00:28:50.958 Latency(us) 00:28:50.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.958 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:50.958 nvme0n1 : 2.01 17872.76 69.82 0.00 0.00 7143.30 6494.02 16324.42 00:28:50.958 =================================================================================================================== 00:28:50.958 Total : 17872.76 69.82 0.00 0.00 7143.30 6494.02 16324.42 00:28:50.958 0 00:28:50.958 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:50.958 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:50.958 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:50.958 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:50.958 | select(.opcode=="crc32c") 00:28:50.958 | "\(.module_name) \(.executed)"' 00:28:50.958 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:51.217 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:51.217 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:51.217 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:51.217 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:51.217 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2955768 00:28:51.217 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2955768 ']' 00:28:51.217 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2955768 00:28:51.217 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:51.217 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:51.217 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2955768 00:28:51.217 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:51.217 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:51.217 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2955768' 00:28:51.217 killing process with pid 2955768 00:28:51.217 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2955768 00:28:51.217 Received shutdown signal, test time was about 2.000000 seconds 00:28:51.217 00:28:51.217 Latency(us) 00:28:51.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:51.217 =================================================================================================================== 00:28:51.217 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:51.217 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2955768 00:28:51.476 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:51.477 11:44:25 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:51.477 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:51.477 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:51.477 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:51.477 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:51.477 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:51.477 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2956456 00:28:51.477 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2956456 /var/tmp/bperf.sock 00:28:51.477 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:51.477 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2956456 ']' 00:28:51.477 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:51.477 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:51.477 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:51.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:51.477 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:51.477 11:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:51.477 [2024-07-15 11:44:25.902961] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:28:51.477 [2024-07-15 11:44:25.903028] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2956456 ] 00:28:51.477 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:51.477 Zero copy mechanism will not be used. 
00:28:51.477 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.736 [2024-07-15 11:44:25.985599] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.736 [2024-07-15 11:44:26.084599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.673 11:44:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:52.673 11:44:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:52.673 11:44:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:52.673 11:44:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:52.673 11:44:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:52.932 11:44:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:52.932 11:44:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:53.191 nvme0n1 00:28:53.191 11:44:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:53.191 11:44:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:53.191 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:53.191 Zero copy mechanism will not be used. 00:28:53.191 Running I/O for 2 seconds... 
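As a quick consistency check on these result tables, IOPS times I/O size should reproduce the MiB/s column; for the 128 KiB randread pass above, for example:

    # spot-check with bc (any calculator works): 3872.54 IOPS at 131072 bytes per I/O
    echo '3872.54 * 131072 / 1048576' | bc -l    # ~484.07 MiB/s, matching the table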
00:28:55.724 00:28:55.724 Latency(us) 00:28:55.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.724 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:55.724 nvme0n1 : 2.00 5352.39 669.05 0.00 0.00 2981.62 2308.65 8817.57 00:28:55.724 =================================================================================================================== 00:28:55.724 Total : 5352.39 669.05 0.00 0.00 2981.62 2308.65 8817.57 00:28:55.724 0 00:28:55.724 11:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:55.724 11:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:55.724 11:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:55.724 11:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:55.724 | select(.opcode=="crc32c") 00:28:55.724 | "\(.module_name) \(.executed)"' 00:28:55.724 11:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:55.724 11:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:55.724 11:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:55.724 11:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:55.724 11:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:55.724 11:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2956456 00:28:55.724 11:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2956456 ']' 00:28:55.724 11:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2956456 00:28:55.724 11:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:55.724 11:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:55.724 11:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2956456 00:28:55.724 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:55.724 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:55.724 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2956456' 00:28:55.724 killing process with pid 2956456 00:28:55.724 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2956456 00:28:55.724 Received shutdown signal, test time was about 2.000000 seconds 00:28:55.724 00:28:55.724 Latency(us) 00:28:55.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.724 =================================================================================================================== 00:28:55.724 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:55.724 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2956456 00:28:55.983 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2953981 00:28:55.983 11:44:30 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2953981 ']' 00:28:55.983 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2953981 00:28:55.983 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:55.983 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:55.983 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2953981 00:28:55.983 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:55.983 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:55.983 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2953981' 00:28:55.983 killing process with pid 2953981 00:28:55.983 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2953981 00:28:55.983 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2953981 00:28:56.242 00:28:56.242 real 0m18.920s 00:28:56.242 user 0m38.027s 00:28:56.242 sys 0m4.462s 00:28:56.242 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:56.242 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:56.242 ************************************ 00:28:56.242 END TEST nvmf_digest_clean 00:28:56.242 ************************************ 00:28:56.242 11:44:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:28:56.242 11:44:30 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:56.242 11:44:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:56.242 11:44:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:56.242 11:44:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:56.242 ************************************ 00:28:56.242 START TEST nvmf_digest_error 00:28:56.242 ************************************ 00:28:56.242 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:28:56.242 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:56.242 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:56.242 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:56.242 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:56.242 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2957347 00:28:56.242 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:56.242 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2957347 00:28:56.242 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2957347 ']' 00:28:56.242 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:28:56.242 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:56.242 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:56.242 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:56.242 11:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:56.242 [2024-07-15 11:44:30.616427] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:28:56.242 [2024-07-15 11:44:30.616478] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:56.242 EAL: No free 2048 kB hugepages reported on node 1 00:28:56.242 [2024-07-15 11:44:30.701281] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.501 [2024-07-15 11:44:30.793443] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:56.501 [2024-07-15 11:44:30.793485] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:56.501 [2024-07-15 11:44:30.793495] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:56.501 [2024-07-15 11:44:30.793504] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:56.501 [2024-07-15 11:44:30.793512] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
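The nvmf_digest_error test starting here reuses the same topology but routes crc32c on the target app to the accel "error" module so that digest corruption can be injected on demand, while the host-side bdevperf is given --nvme-error-stat and --bdev-retry-count -1, presumably so digest failures are counted and retried rather than failing the run outright. The relevant calls, as echoed in the surrounding lines (rpc_cmd goes to the target's /var/tmp/spdk.sock, the last call to the bperf socket):

    rpc_cmd accel_assign_opc -o crc32c -m error                    # at target start, before framework init
    rpc_cmd accel_error_inject_error -o crc32c -t disable          # baseline case: no corruption
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt 256 digest operations
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1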
00:28:56.501 [2024-07-15 11:44:30.793534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.760 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:56.760 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:56.760 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:56.760 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:56.760 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:56.760 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:56.760 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:56.760 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.760 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:56.760 [2024-07-15 11:44:31.114682] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:56.760 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.760 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:56.760 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:56.760 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.760 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:56.760 null0 00:28:56.760 [2024-07-15 11:44:31.209323] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:57.019 [2024-07-15 11:44:31.233510] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:57.019 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.019 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:57.019 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:57.019 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:57.019 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:57.019 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:57.019 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2957502 00:28:57.019 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2957502 /var/tmp/bperf.sock 00:28:57.019 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:57.019 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2957502 ']' 00:28:57.019 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:57.019 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:28:57.019 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:57.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:57.019 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:57.019 11:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:57.019 [2024-07-15 11:44:31.289680] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:28:57.019 [2024-07-15 11:44:31.289733] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2957502 ] 00:28:57.019 EAL: No free 2048 kB hugepages reported on node 1 00:28:57.019 [2024-07-15 11:44:31.370369] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.019 [2024-07-15 11:44:31.473872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.957 11:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:57.957 11:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:57.957 11:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:57.957 11:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:58.216 11:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:58.216 11:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.216 11:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:58.216 11:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.216 11:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:58.216 11:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:58.474 nvme0n1 00:28:58.474 11:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:58.474 11:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.474 11:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:58.734 11:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.734 11:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:58.734 11:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:58.734 Running I/O for 2 seconds... 00:28:58.734 [2024-07-15 11:44:33.079030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:58.734 [2024-07-15 11:44:33.079080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.734 [2024-07-15 11:44:33.079099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.734 [2024-07-15 11:44:33.101219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:58.734 [2024-07-15 11:44:33.101265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.734 [2024-07-15 11:44:33.101282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.734 [2024-07-15 11:44:33.120763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:58.734 [2024-07-15 11:44:33.120799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.734 [2024-07-15 11:44:33.120815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.734 [2024-07-15 11:44:33.139854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:58.734 [2024-07-15 11:44:33.139889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.734 [2024-07-15 11:44:33.139905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.734 [2024-07-15 11:44:33.158918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:58.734 [2024-07-15 11:44:33.158952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.734 [2024-07-15 11:44:33.158968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.734 [2024-07-15 11:44:33.178574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:58.734 [2024-07-15 11:44:33.178609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.734 [2024-07-15 11:44:33.178624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.734 [2024-07-15 11:44:33.193424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:58.734 [2024-07-15 11:44:33.193458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21218 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.734 [2024-07-15 11:44:33.193474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.993 [2024-07-15 11:44:33.212111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:58.993 [2024-07-15 11:44:33.212146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.993 [2024-07-15 11:44:33.212163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.993 [2024-07-15 11:44:33.227959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:58.993 [2024-07-15 11:44:33.227994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.993 [2024-07-15 11:44:33.228009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.993 [2024-07-15 11:44:33.249760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:58.993 [2024-07-15 11:44:33.249794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.993 [2024-07-15 11:44:33.249815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.993 [2024-07-15 11:44:33.263669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:58.993 [2024-07-15 11:44:33.263704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.993 [2024-07-15 11:44:33.263720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.993 [2024-07-15 11:44:33.283343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:58.993 [2024-07-15 11:44:33.283377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.993 [2024-07-15 11:44:33.283391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.993 [2024-07-15 11:44:33.302493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:58.993 [2024-07-15 11:44:33.302527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.993 [2024-07-15 11:44:33.302543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.993 [2024-07-15 11:44:33.317475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:58.993 [2024-07-15 11:44:33.317509] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.993 [2024-07-15 11:44:33.317525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.993 [2024-07-15 11:44:33.334841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:58.993 [2024-07-15 11:44:33.334875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.993 [2024-07-15 11:44:33.334890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.993 [2024-07-15 11:44:33.353840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:58.993 [2024-07-15 11:44:33.353875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.993 [2024-07-15 11:44:33.353890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.993 [2024-07-15 11:44:33.368137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:58.993 [2024-07-15 11:44:33.368171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.993 [2024-07-15 11:44:33.368187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.993 [2024-07-15 11:44:33.382436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:58.993 [2024-07-15 11:44:33.382471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.994 [2024-07-15 11:44:33.382487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.994 [2024-07-15 11:44:33.400734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:58.994 [2024-07-15 11:44:33.400774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.994 [2024-07-15 11:44:33.400791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.994 [2024-07-15 11:44:33.417141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:58.994 [2024-07-15 11:44:33.417176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.994 [2024-07-15 11:44:33.417192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.994 [2024-07-15 11:44:33.432094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:58.994 [2024-07-15 
11:44:33.432129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.994 [2024-07-15 11:44:33.432144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.994 [2024-07-15 11:44:33.449373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:58.994 [2024-07-15 11:44:33.449407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.994 [2024-07-15 11:44:33.449421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.253 [2024-07-15 11:44:33.464558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.253 [2024-07-15 11:44:33.464592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.253 [2024-07-15 11:44:33.464607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.253 [2024-07-15 11:44:33.483725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.253 [2024-07-15 11:44:33.483760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.253 [2024-07-15 11:44:33.483776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.253 [2024-07-15 11:44:33.504459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.253 [2024-07-15 11:44:33.504494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.253 [2024-07-15 11:44:33.504509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.253 [2024-07-15 11:44:33.525449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.253 [2024-07-15 11:44:33.525483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.253 [2024-07-15 11:44:33.525500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.253 [2024-07-15 11:44:33.545895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.253 [2024-07-15 11:44:33.545929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.253 [2024-07-15 11:44:33.545945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.253 [2024-07-15 11:44:33.561414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x16f5580) 00:28:59.253 [2024-07-15 11:44:33.561449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.253 [2024-07-15 11:44:33.561465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.253 [2024-07-15 11:44:33.583043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.253 [2024-07-15 11:44:33.583078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.253 [2024-07-15 11:44:33.583094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.253 [2024-07-15 11:44:33.601563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.253 [2024-07-15 11:44:33.601597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.253 [2024-07-15 11:44:33.601613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.253 [2024-07-15 11:44:33.616791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.253 [2024-07-15 11:44:33.616826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.253 [2024-07-15 11:44:33.616841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.254 [2024-07-15 11:44:33.636662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.254 [2024-07-15 11:44:33.636696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.254 [2024-07-15 11:44:33.636711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.254 [2024-07-15 11:44:33.652275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.254 [2024-07-15 11:44:33.652309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.254 [2024-07-15 11:44:33.652324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.254 [2024-07-15 11:44:33.673979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.254 [2024-07-15 11:44:33.674012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.254 [2024-07-15 11:44:33.674028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.254 [2024-07-15 11:44:33.692476] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.254 [2024-07-15 11:44:33.692509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.254 [2024-07-15 11:44:33.692525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.254 [2024-07-15 11:44:33.708736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.254 [2024-07-15 11:44:33.708770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.254 [2024-07-15 11:44:33.708792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.513 [2024-07-15 11:44:33.728846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.513 [2024-07-15 11:44:33.728881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.513 [2024-07-15 11:44:33.728897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.513 [2024-07-15 11:44:33.747748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.513 [2024-07-15 11:44:33.747782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.513 [2024-07-15 11:44:33.747798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.513 [2024-07-15 11:44:33.767430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.513 [2024-07-15 11:44:33.767463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.513 [2024-07-15 11:44:33.767478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.513 [2024-07-15 11:44:33.783158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.513 [2024-07-15 11:44:33.783191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.513 [2024-07-15 11:44:33.783208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.513 [2024-07-15 11:44:33.800735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.513 [2024-07-15 11:44:33.800769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.513 [2024-07-15 11:44:33.800785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
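Each of these records is one 4096-byte read whose data digest check failed on the initiator: the target's crc32c operations run through the error module and are being corrupted, so the data digest it sends no longer matches the payload, and the initiator (attached with --ddgst) flags the mismatch and completes the read as SCT 00 / SC 22, a transient transport error. Stripped of the xtrace noise, the initiator-side commands echoed above amount to roughly the sketch below; command lines and flags are copied from this log, only the helper names are invented here.

  # sketch of run_bperf_err randread 4096 128 as recorded in this trace
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  tgt_rpc()   { $SPDK/scripts/rpc.py "$@"; }                    # "rpc_cmd": nvmf target on /var/tmp/spdk.sock
  bperf_rpc() { $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  tgt_rpc accel_error_inject_error -o crc32c -t disable         # attach cleanly first
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0                    # data digest enabled on the connection
  tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 256  # now corrupt 256 crc32c operations
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests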
00:28:59.513 [2024-07-15 11:44:33.819860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.513 [2024-07-15 11:44:33.819894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.513 [2024-07-15 11:44:33.819910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.513 [2024-07-15 11:44:33.834807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.513 [2024-07-15 11:44:33.834840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.513 [2024-07-15 11:44:33.834856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.513 [2024-07-15 11:44:33.848709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.513 [2024-07-15 11:44:33.848743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.513 [2024-07-15 11:44:33.848758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.513 [2024-07-15 11:44:33.863561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.513 [2024-07-15 11:44:33.863594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.513 [2024-07-15 11:44:33.863609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.513 [2024-07-15 11:44:33.877771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.513 [2024-07-15 11:44:33.877805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.513 [2024-07-15 11:44:33.877820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.513 [2024-07-15 11:44:33.894125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.513 [2024-07-15 11:44:33.894157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.513 [2024-07-15 11:44:33.894172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.513 [2024-07-15 11:44:33.909149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.513 [2024-07-15 11:44:33.909182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.513 [2024-07-15 11:44:33.909198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.513 [2024-07-15 11:44:33.928575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.513 [2024-07-15 11:44:33.928608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.513 [2024-07-15 11:44:33.928623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.513 [2024-07-15 11:44:33.948356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.513 [2024-07-15 11:44:33.948389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.513 [2024-07-15 11:44:33.948404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.513 [2024-07-15 11:44:33.963710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.513 [2024-07-15 11:44:33.963743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.513 [2024-07-15 11:44:33.963758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.773 [2024-07-15 11:44:33.983792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.773 [2024-07-15 11:44:33.983825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.773 [2024-07-15 11:44:33.983841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.773 [2024-07-15 11:44:34.003458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.773 [2024-07-15 11:44:34.003493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.773 [2024-07-15 11:44:34.003515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.773 [2024-07-15 11:44:34.019575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.773 [2024-07-15 11:44:34.019608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.773 [2024-07-15 11:44:34.019624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.773 [2024-07-15 11:44:34.038644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.773 [2024-07-15 11:44:34.038676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.773 [2024-07-15 11:44:34.038691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.773 [2024-07-15 11:44:34.053565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.773 [2024-07-15 11:44:34.053598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.773 [2024-07-15 11:44:34.053613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.773 [2024-07-15 11:44:34.068188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.773 [2024-07-15 11:44:34.068221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.773 [2024-07-15 11:44:34.068236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.773 [2024-07-15 11:44:34.083180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.774 [2024-07-15 11:44:34.083213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.774 [2024-07-15 11:44:34.083229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.774 [2024-07-15 11:44:34.104454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.774 [2024-07-15 11:44:34.104487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.774 [2024-07-15 11:44:34.104503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.774 [2024-07-15 11:44:34.118525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.774 [2024-07-15 11:44:34.118557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.774 [2024-07-15 11:44:34.118572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.774 [2024-07-15 11:44:34.139649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.774 [2024-07-15 11:44:34.139683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.774 [2024-07-15 11:44:34.139698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.774 [2024-07-15 11:44:34.160429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.774 [2024-07-15 11:44:34.160467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:59.774 [2024-07-15 11:44:34.160482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.774 [2024-07-15 11:44:34.175189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.774 [2024-07-15 11:44:34.175221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.774 [2024-07-15 11:44:34.175236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.774 [2024-07-15 11:44:34.194730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.774 [2024-07-15 11:44:34.194764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.774 [2024-07-15 11:44:34.194780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.774 [2024-07-15 11:44:34.209215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.774 [2024-07-15 11:44:34.209248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.774 [2024-07-15 11:44:34.209270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.774 [2024-07-15 11:44:34.230302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:28:59.774 [2024-07-15 11:44:34.230336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.774 [2024-07-15 11:44:34.230352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.034 [2024-07-15 11:44:34.249117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.034 [2024-07-15 11:44:34.249152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.034 [2024-07-15 11:44:34.249167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.034 [2024-07-15 11:44:34.264981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.034 [2024-07-15 11:44:34.265017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.034 [2024-07-15 11:44:34.265032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.034 [2024-07-15 11:44:34.280606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.034 [2024-07-15 11:44:34.280639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 
lba:7930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.034 [2024-07-15 11:44:34.280655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.034 [2024-07-15 11:44:34.300948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.034 [2024-07-15 11:44:34.300983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.034 [2024-07-15 11:44:34.300998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.034 [2024-07-15 11:44:34.318749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.034 [2024-07-15 11:44:34.318783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.034 [2024-07-15 11:44:34.318800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.034 [2024-07-15 11:44:34.334271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.034 [2024-07-15 11:44:34.334304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.034 [2024-07-15 11:44:34.334319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.034 [2024-07-15 11:44:34.353774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.034 [2024-07-15 11:44:34.353807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.034 [2024-07-15 11:44:34.353822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.034 [2024-07-15 11:44:34.369572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.034 [2024-07-15 11:44:34.369605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.034 [2024-07-15 11:44:34.369620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.034 [2024-07-15 11:44:34.392622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.034 [2024-07-15 11:44:34.392655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.034 [2024-07-15 11:44:34.392670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.034 [2024-07-15 11:44:34.414327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.034 [2024-07-15 11:44:34.414361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.034 [2024-07-15 11:44:34.414376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.034 [2024-07-15 11:44:34.428777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.034 [2024-07-15 11:44:34.428810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.034 [2024-07-15 11:44:34.428825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.034 [2024-07-15 11:44:34.449690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.034 [2024-07-15 11:44:34.449721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.034 [2024-07-15 11:44:34.449737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.034 [2024-07-15 11:44:34.464151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.034 [2024-07-15 11:44:34.464183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.035 [2024-07-15 11:44:34.464204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.035 [2024-07-15 11:44:34.484665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.035 [2024-07-15 11:44:34.484699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.035 [2024-07-15 11:44:34.484714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.294 [2024-07-15 11:44:34.506368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.294 [2024-07-15 11:44:34.506403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.294 [2024-07-15 11:44:34.506422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.294 [2024-07-15 11:44:34.520938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.294 [2024-07-15 11:44:34.520971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.294 [2024-07-15 11:44:34.520987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.294 [2024-07-15 11:44:34.541170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 
00:29:00.294 [2024-07-15 11:44:34.541203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.294 [2024-07-15 11:44:34.541218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.294 [2024-07-15 11:44:34.559540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.294 [2024-07-15 11:44:34.559574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.294 [2024-07-15 11:44:34.559590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.294 [2024-07-15 11:44:34.575537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.294 [2024-07-15 11:44:34.575570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.294 [2024-07-15 11:44:34.575585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.294 [2024-07-15 11:44:34.595988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.294 [2024-07-15 11:44:34.596021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.294 [2024-07-15 11:44:34.596037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.294 [2024-07-15 11:44:34.611097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.294 [2024-07-15 11:44:34.611132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.294 [2024-07-15 11:44:34.611147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.294 [2024-07-15 11:44:34.632664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.294 [2024-07-15 11:44:34.632698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.294 [2024-07-15 11:44:34.632714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.294 [2024-07-15 11:44:34.647502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.294 [2024-07-15 11:44:34.647536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.294 [2024-07-15 11:44:34.647552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.294 [2024-07-15 11:44:34.668491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.294 [2024-07-15 11:44:34.668524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.294 [2024-07-15 11:44:34.668539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.294 [2024-07-15 11:44:34.689548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.294 [2024-07-15 11:44:34.689580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.294 [2024-07-15 11:44:34.689597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.294 [2024-07-15 11:44:34.705136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.294 [2024-07-15 11:44:34.705168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.295 [2024-07-15 11:44:34.705184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.295 [2024-07-15 11:44:34.725007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.295 [2024-07-15 11:44:34.725040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.295 [2024-07-15 11:44:34.725055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.295 [2024-07-15 11:44:34.746119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.295 [2024-07-15 11:44:34.746152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.295 [2024-07-15 11:44:34.746168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.554 [2024-07-15 11:44:34.764664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.554 [2024-07-15 11:44:34.764698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.554 [2024-07-15 11:44:34.764713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.554 [2024-07-15 11:44:34.780050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.554 [2024-07-15 11:44:34.780082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.554 [2024-07-15 11:44:34.780107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.554 [2024-07-15 11:44:34.799160] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.554 [2024-07-15 11:44:34.799193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.554 [2024-07-15 11:44:34.799208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.554 [2024-07-15 11:44:34.818071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.554 [2024-07-15 11:44:34.818105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.554 [2024-07-15 11:44:34.818120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.554 [2024-07-15 11:44:34.833961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.554 [2024-07-15 11:44:34.833994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.554 [2024-07-15 11:44:34.834009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.554 [2024-07-15 11:44:34.853871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.554 [2024-07-15 11:44:34.853905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.554 [2024-07-15 11:44:34.853920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.554 [2024-07-15 11:44:34.875541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.554 [2024-07-15 11:44:34.875575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.554 [2024-07-15 11:44:34.875591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.554 [2024-07-15 11:44:34.896945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.554 [2024-07-15 11:44:34.896980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.554 [2024-07-15 11:44:34.896995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.554 [2024-07-15 11:44:34.918087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.554 [2024-07-15 11:44:34.918120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.554 [2024-07-15 11:44:34.918136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:29:00.554 [2024-07-15 11:44:34.939831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.554 [2024-07-15 11:44:34.939867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.554 [2024-07-15 11:44:34.939883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.554 [2024-07-15 11:44:34.953602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.554 [2024-07-15 11:44:34.953640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.554 [2024-07-15 11:44:34.953656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.554 [2024-07-15 11:44:34.973613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.554 [2024-07-15 11:44:34.973646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.554 [2024-07-15 11:44:34.973662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.554 [2024-07-15 11:44:34.994683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.554 [2024-07-15 11:44:34.994718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.554 [2024-07-15 11:44:34.994733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.554 [2024-07-15 11:44:35.007647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.554 [2024-07-15 11:44:35.007681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.554 [2024-07-15 11:44:35.007696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.813 [2024-07-15 11:44:35.028759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.813 [2024-07-15 11:44:35.028792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.813 [2024-07-15 11:44:35.028808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.813 [2024-07-15 11:44:35.043153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.813 [2024-07-15 11:44:35.043186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.813 [2024-07-15 11:44:35.043201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.813 [2024-07-15 11:44:35.060837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f5580) 00:29:00.814 [2024-07-15 11:44:35.060872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.814 [2024-07-15 11:44:35.060887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.814 00:29:00.814 Latency(us) 00:29:00.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.814 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:00.814 nvme0n1 : 2.01 14178.81 55.39 0.00 0.00 9012.16 4944.99 30027.40 00:29:00.814 =================================================================================================================== 00:29:00.814 Total : 14178.81 55.39 0.00 0.00 9012.16 4944.99 30027.40 00:29:00.814 0 00:29:00.814 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:00.814 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:00.814 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:00.814 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:00.814 | .driver_specific 00:29:00.814 | .nvme_error 00:29:00.814 | .status_code 00:29:00.814 | .command_transient_transport_error' 00:29:01.073 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 111 > 0 )) 00:29:01.073 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2957502 00:29:01.073 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2957502 ']' 00:29:01.073 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2957502 00:29:01.073 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:01.073 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:01.073 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2957502 00:29:01.073 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:01.073 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:01.073 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2957502' 00:29:01.073 killing process with pid 2957502 00:29:01.073 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2957502 00:29:01.073 Received shutdown signal, test time was about 2.000000 seconds 00:29:01.073 00:29:01.073 Latency(us) 00:29:01.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.073 =================================================================================================================== 00:29:01.073 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:01.073 11:44:35 
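A minimal by-hand sketch of the check that just ran above (get_transient_errcount piping bdev_get_iostat through jq and asserting the count, here 111, is non-zero). It reuses only the rpc.py path, socket, and jq filter visible in this trace; treat them as placeholders for this workspace, and note the counters are only populated when the suite's bdev_nvme --nvme-error-stat option is in effect.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/bperf.sock

  # Read per-bdev error statistics from the running bdevperf app and pull out the
  # transient transport error count for nvme0n1 (the same jq filter as in the trace).
  errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error')

  # The digest-error case passes only when at least one such error was counted.
  (( errcount > 0 )) && echo "saw $errcount transient transport errors on nvme0n1"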
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2957502 00:29:01.332 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:29:01.332 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:01.332 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:01.332 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:01.332 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:01.332 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2958296 00:29:01.332 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2958296 /var/tmp/bperf.sock 00:29:01.332 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:29:01.332 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2958296 ']' 00:29:01.332 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:01.332 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:01.332 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:01.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:01.332 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:01.332 11:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:01.332 [2024-07-15 11:44:35.663884] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:29:01.332 [2024-07-15 11:44:35.663943] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2958296 ] 00:29:01.332 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:01.332 Zero copy mechanism will not be used. 
00:29:01.332 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.332 [2024-07-15 11:44:35.744029] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.591 [2024-07-15 11:44:35.847753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.160 11:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:02.160 11:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:02.160 11:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:02.160 11:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:02.419 11:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:02.419 11:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.419 11:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:02.419 11:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.419 11:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:02.419 11:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:02.986 nvme0n1 00:29:02.986 11:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:02.986 11:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.986 11:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:02.986 11:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.986 11:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:02.986 11:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:03.246 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:03.246 Zero copy mechanism will not be used. 00:29:03.246 Running I/O for 2 seconds... 
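Before the 131072-byte, qd=16 randread run above starts issuing I/O, the trace walks a fixed setup sequence; the condensed sketch below restates it using the exact rpc.py and bdevperf.py invocations shown. Paths, the bperf socket, and the 10.0.0.2 target are the ones this workspace uses; rpc_cmd is the suite's own helper and the socket it resolves to is not expanded in this excerpt, so both are assumptions outside this run.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # Keep per-type NVMe error counters and retry failed I/O indefinitely in the bdev layer.
  $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Leave crc32c error injection disabled while the controller is attached...
  rpc_cmd accel_error_inject_error -o crc32c -t disable

  # ...then attach the NVMe-oF TCP target with data digest enabled (--ddgst).
  $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Switch injection to corrupt crc32c results (-o crc32c -t corrupt -i 32, verbatim from
  # the trace), which is what makes received data digests stop matching and produces the
  # "data digest error" lines that follow.
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

  # Finally kick off the queued randread workload inside the already-running bdevperf.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests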
00:29:03.246 [2024-07-15 11:44:37.463493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.246 [2024-07-15 11:44:37.463543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.246 [2024-07-15 11:44:37.463561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.246 [2024-07-15 11:44:37.471079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.246 [2024-07-15 11:44:37.471122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.246 [2024-07-15 11:44:37.471140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.246 [2024-07-15 11:44:37.479339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.246 [2024-07-15 11:44:37.479375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.246 [2024-07-15 11:44:37.479396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.246 [2024-07-15 11:44:37.487500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.246 [2024-07-15 11:44:37.487536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.246 [2024-07-15 11:44:37.487551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.246 [2024-07-15 11:44:37.495574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.246 [2024-07-15 11:44:37.495608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.246 [2024-07-15 11:44:37.495623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.246 [2024-07-15 11:44:37.503544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.246 [2024-07-15 11:44:37.503578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.246 [2024-07-15 11:44:37.503592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.246 [2024-07-15 11:44:37.511401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.246 [2024-07-15 11:44:37.511435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.246 [2024-07-15 11:44:37.511450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.246 [2024-07-15 11:44:37.519439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.246 [2024-07-15 11:44:37.519472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.246 [2024-07-15 11:44:37.519488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.246 [2024-07-15 11:44:37.527885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.246 [2024-07-15 11:44:37.527921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.247 [2024-07-15 11:44:37.527937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.247 [2024-07-15 11:44:37.536089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.247 [2024-07-15 11:44:37.536124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.247 [2024-07-15 11:44:37.536139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.247 [2024-07-15 11:44:37.544149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.247 [2024-07-15 11:44:37.544184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.247 [2024-07-15 11:44:37.544199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.247 [2024-07-15 11:44:37.552081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.247 [2024-07-15 11:44:37.552121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.247 [2024-07-15 11:44:37.552136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.247 [2024-07-15 11:44:37.560368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.247 [2024-07-15 11:44:37.560402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.247 [2024-07-15 11:44:37.560417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.247 [2024-07-15 11:44:37.568672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.247 [2024-07-15 11:44:37.568706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.247 [2024-07-15 11:44:37.568721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.247 [2024-07-15 11:44:37.576828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.247 [2024-07-15 11:44:37.576863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.247 [2024-07-15 11:44:37.576877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.247 [2024-07-15 11:44:37.585012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.247 [2024-07-15 11:44:37.585047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.247 [2024-07-15 11:44:37.585062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.247 [2024-07-15 11:44:37.593371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.247 [2024-07-15 11:44:37.593405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.247 [2024-07-15 11:44:37.593421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.247 [2024-07-15 11:44:37.601613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.247 [2024-07-15 11:44:37.601647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.247 [2024-07-15 11:44:37.601662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.247 [2024-07-15 11:44:37.609362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.247 [2024-07-15 11:44:37.609395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.247 [2024-07-15 11:44:37.609410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.247 [2024-07-15 11:44:37.617539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.247 [2024-07-15 11:44:37.617573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.247 [2024-07-15 11:44:37.617588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.247 [2024-07-15 11:44:37.625820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.247 [2024-07-15 11:44:37.625855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.247 [2024-07-15 11:44:37.625870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.247 [2024-07-15 11:44:37.634049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.247 [2024-07-15 11:44:37.634083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.247 [2024-07-15 11:44:37.634098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.247 [2024-07-15 11:44:37.642200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.247 [2024-07-15 11:44:37.642233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.247 [2024-07-15 11:44:37.642248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.247 [2024-07-15 11:44:37.649812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.247 [2024-07-15 11:44:37.649847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.247 [2024-07-15 11:44:37.649861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.247 [2024-07-15 11:44:37.657640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.247 [2024-07-15 11:44:37.657673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.247 [2024-07-15 11:44:37.657688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.247 [2024-07-15 11:44:37.665414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.247 [2024-07-15 11:44:37.665449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.247 [2024-07-15 11:44:37.665464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.247 [2024-07-15 11:44:37.673444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.247 [2024-07-15 11:44:37.673479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.247 [2024-07-15 11:44:37.673495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.247 [2024-07-15 11:44:37.681621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.247 [2024-07-15 11:44:37.681656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.247 
[2024-07-15 11:44:37.681670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.247 [2024-07-15 11:44:37.689523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.247 [2024-07-15 11:44:37.689558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.247 [2024-07-15 11:44:37.689579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.247 [2024-07-15 11:44:37.697403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.247 [2024-07-15 11:44:37.697439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.247 [2024-07-15 11:44:37.697454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.247 [2024-07-15 11:44:37.705209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.247 [2024-07-15 11:44:37.705244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.247 [2024-07-15 11:44:37.705271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.507 [2024-07-15 11:44:37.712959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.712993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.713008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.717541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.717575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.717591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.725730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.725764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.725779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.733861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.733894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.733910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.741646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.741679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.741695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.749715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.749748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.749763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.757114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.757155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.757170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.765044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.765080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.765096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.773390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.773425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.773440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.781559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.781593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.781609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.789722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.789757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.789773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.798047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.798082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.798097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.806298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.806331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.806346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.814361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.814395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.814409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.822343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.822377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.822392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.830471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.830504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.830519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.838199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.838234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.838248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.846117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.846151] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.846166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.853555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.853589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.853604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.861351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.861384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.861399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.869037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.869071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.869086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.876754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.876788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.876803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.884533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.884566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.884581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.892950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.892985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.893005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.901113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.901147] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.901162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.908989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.909022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.909037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.917474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.917509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.917524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.926099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.926135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.926150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.934338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.508 [2024-07-15 11:44:37.934373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.508 [2024-07-15 11:44:37.934388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.508 [2024-07-15 11:44:37.942641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.509 [2024-07-15 11:44:37.942675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.509 [2024-07-15 11:44:37.942691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.509 [2024-07-15 11:44:37.950563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.509 [2024-07-15 11:44:37.950598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.509 [2024-07-15 11:44:37.950614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.509 [2024-07-15 11:44:37.958528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 
00:29:03.509 [2024-07-15 11:44:37.958563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.509 [2024-07-15 11:44:37.958578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.509 [2024-07-15 11:44:37.966749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.509 [2024-07-15 11:44:37.966791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.509 [2024-07-15 11:44:37.966806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.769 [2024-07-15 11:44:37.974568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.769 [2024-07-15 11:44:37.974605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.769 [2024-07-15 11:44:37.974621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.769 [2024-07-15 11:44:37.982693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.769 [2024-07-15 11:44:37.982728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.769 [2024-07-15 11:44:37.982744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.769 [2024-07-15 11:44:37.990727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.769 [2024-07-15 11:44:37.990761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.769 [2024-07-15 11:44:37.990777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.769 [2024-07-15 11:44:37.998832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.769 [2024-07-15 11:44:37.998865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.769 [2024-07-15 11:44:37.998881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.769 [2024-07-15 11:44:38.006952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.769 [2024-07-15 11:44:38.006986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.769 [2024-07-15 11:44:38.007001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.769 [2024-07-15 11:44:38.012087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x9b7490) 00:29:03.769 [2024-07-15 11:44:38.012120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.769 [2024-07-15 11:44:38.012134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.769 [2024-07-15 11:44:38.018163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.769 [2024-07-15 11:44:38.018198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.769 [2024-07-15 11:44:38.018213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.769 [2024-07-15 11:44:38.025597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.769 [2024-07-15 11:44:38.025631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.769 [2024-07-15 11:44:38.025646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.769 [2024-07-15 11:44:38.032920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.769 [2024-07-15 11:44:38.032956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.769 [2024-07-15 11:44:38.032971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.769 [2024-07-15 11:44:38.040963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.769 [2024-07-15 11:44:38.040998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.769 [2024-07-15 11:44:38.041013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.769 [2024-07-15 11:44:38.050651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.769 [2024-07-15 11:44:38.050688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.769 [2024-07-15 11:44:38.050704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.769 [2024-07-15 11:44:38.060762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.769 [2024-07-15 11:44:38.060798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.769 [2024-07-15 11:44:38.060814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.769 [2024-07-15 11:44:38.070658] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.769 [2024-07-15 11:44:38.070698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.769 [2024-07-15 11:44:38.070714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.769 [2024-07-15 11:44:38.079792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.769 [2024-07-15 11:44:38.079829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.769 [2024-07-15 11:44:38.079846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.769 [2024-07-15 11:44:38.089733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.769 [2024-07-15 11:44:38.089770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.769 [2024-07-15 11:44:38.089786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.769 [2024-07-15 11:44:38.099720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.769 [2024-07-15 11:44:38.099757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.769 [2024-07-15 11:44:38.099774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.769 [2024-07-15 11:44:38.109550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.769 [2024-07-15 11:44:38.109586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.769 [2024-07-15 11:44:38.109608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.769 [2024-07-15 11:44:38.118416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.769 [2024-07-15 11:44:38.118452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.770 [2024-07-15 11:44:38.118468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.770 [2024-07-15 11:44:38.124703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.770 [2024-07-15 11:44:38.124740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.770 [2024-07-15 11:44:38.124755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:29:03.770 [2024-07-15 11:44:38.133868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.770 [2024-07-15 11:44:38.133902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.770 [2024-07-15 11:44:38.133918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.770 [2024-07-15 11:44:38.142223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.770 [2024-07-15 11:44:38.142265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.770 [2024-07-15 11:44:38.142281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.770 [2024-07-15 11:44:38.150174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.770 [2024-07-15 11:44:38.150208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.770 [2024-07-15 11:44:38.150224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.770 [2024-07-15 11:44:38.158085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.770 [2024-07-15 11:44:38.158118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.770 [2024-07-15 11:44:38.158133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.770 [2024-07-15 11:44:38.165959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.770 [2024-07-15 11:44:38.165992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.770 [2024-07-15 11:44:38.166008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.770 [2024-07-15 11:44:38.173834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.770 [2024-07-15 11:44:38.173868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.770 [2024-07-15 11:44:38.173883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.770 [2024-07-15 11:44:38.181673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.770 [2024-07-15 11:44:38.181712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.770 [2024-07-15 11:44:38.181727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.770 [2024-07-15 11:44:38.189659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.770 [2024-07-15 11:44:38.189692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.770 [2024-07-15 11:44:38.189708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.770 [2024-07-15 11:44:38.197702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.770 [2024-07-15 11:44:38.197736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.770 [2024-07-15 11:44:38.197751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.770 [2024-07-15 11:44:38.205876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.770 [2024-07-15 11:44:38.205910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.770 [2024-07-15 11:44:38.205925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.770 [2024-07-15 11:44:38.214228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.770 [2024-07-15 11:44:38.214270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.770 [2024-07-15 11:44:38.214285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.770 [2024-07-15 11:44:38.222378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.770 [2024-07-15 11:44:38.222412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.770 [2024-07-15 11:44:38.222427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.770 [2024-07-15 11:44:38.230922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:03.770 [2024-07-15 11:44:38.230956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.770 [2024-07-15 11:44:38.230971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.030 [2024-07-15 11:44:38.239663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.030 [2024-07-15 11:44:38.239697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.030 [2024-07-15 11:44:38.239713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.030 [2024-07-15 11:44:38.248852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.030 [2024-07-15 11:44:38.248889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.030 [2024-07-15 11:44:38.248905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.030 [2024-07-15 11:44:38.257973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.030 [2024-07-15 11:44:38.258008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.030 [2024-07-15 11:44:38.258024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.030 [2024-07-15 11:44:38.266907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.030 [2024-07-15 11:44:38.266942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.030 [2024-07-15 11:44:38.266958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.030 [2024-07-15 11:44:38.276212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.030 [2024-07-15 11:44:38.276247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.030 [2024-07-15 11:44:38.276272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.030 [2024-07-15 11:44:38.285425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.030 [2024-07-15 11:44:38.285461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.030 [2024-07-15 11:44:38.285477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.030 [2024-07-15 11:44:38.294671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.030 [2024-07-15 11:44:38.294705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.030 [2024-07-15 11:44:38.294722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.030 [2024-07-15 11:44:38.303128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.030 [2024-07-15 11:44:38.303164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.030 [2024-07-15 11:44:38.303179] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.030 [2024-07-15 11:44:38.311426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.030 [2024-07-15 11:44:38.311460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.030 [2024-07-15 11:44:38.311475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.030 [2024-07-15 11:44:38.320038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.030 [2024-07-15 11:44:38.320073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.030 [2024-07-15 11:44:38.320088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.030 [2024-07-15 11:44:38.329288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.030 [2024-07-15 11:44:38.329323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.030 [2024-07-15 11:44:38.329344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.030 [2024-07-15 11:44:38.338412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.030 [2024-07-15 11:44:38.338447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.030 [2024-07-15 11:44:38.338462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.030 [2024-07-15 11:44:38.348309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.030 [2024-07-15 11:44:38.348345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.030 [2024-07-15 11:44:38.348361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.030 [2024-07-15 11:44:38.358094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.030 [2024-07-15 11:44:38.358129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.030 [2024-07-15 11:44:38.358145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.030 [2024-07-15 11:44:38.366315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.030 [2024-07-15 11:44:38.366350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.030 
[2024-07-15 11:44:38.366365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.030 [2024-07-15 11:44:38.374442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.031 [2024-07-15 11:44:38.374477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.031 [2024-07-15 11:44:38.374492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.031 [2024-07-15 11:44:38.382176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.031 [2024-07-15 11:44:38.382210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.031 [2024-07-15 11:44:38.382225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.031 [2024-07-15 11:44:38.390126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.031 [2024-07-15 11:44:38.390160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.031 [2024-07-15 11:44:38.390175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.031 [2024-07-15 11:44:38.398413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.031 [2024-07-15 11:44:38.398447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.031 [2024-07-15 11:44:38.398462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.031 [2024-07-15 11:44:38.406766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.031 [2024-07-15 11:44:38.406801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.031 [2024-07-15 11:44:38.406817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.031 [2024-07-15 11:44:38.415147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.031 [2024-07-15 11:44:38.415183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.031 [2024-07-15 11:44:38.415199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.031 [2024-07-15 11:44:38.423319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.031 [2024-07-15 11:44:38.423353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:576 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:04.031 [2024-07-15 11:44:38.423368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.031 [2024-07-15 11:44:38.431768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.031 [2024-07-15 11:44:38.431802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.031 [2024-07-15 11:44:38.431817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.031 [2024-07-15 11:44:38.440031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.031 [2024-07-15 11:44:38.440066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.031 [2024-07-15 11:44:38.440081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.031 [2024-07-15 11:44:38.448400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.031 [2024-07-15 11:44:38.448436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.031 [2024-07-15 11:44:38.448451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.031 [2024-07-15 11:44:38.456859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.031 [2024-07-15 11:44:38.456893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.031 [2024-07-15 11:44:38.456908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.031 [2024-07-15 11:44:38.465354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.031 [2024-07-15 11:44:38.465388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.031 [2024-07-15 11:44:38.465403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.031 [2024-07-15 11:44:38.473748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.031 [2024-07-15 11:44:38.473783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.031 [2024-07-15 11:44:38.473804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.031 [2024-07-15 11:44:38.482181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.031 [2024-07-15 11:44:38.482217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.031 [2024-07-15 11:44:38.482232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.031 [2024-07-15 11:44:38.490728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.031 [2024-07-15 11:44:38.490764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.031 [2024-07-15 11:44:38.490781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.291 [2024-07-15 11:44:38.499048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.291 [2024-07-15 11:44:38.499083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.291 [2024-07-15 11:44:38.499098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.291 [2024-07-15 11:44:38.507179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.291 [2024-07-15 11:44:38.507212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.291 [2024-07-15 11:44:38.507228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.291 [2024-07-15 11:44:38.515285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.291 [2024-07-15 11:44:38.515317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.291 [2024-07-15 11:44:38.515332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.291 [2024-07-15 11:44:38.523618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.291 [2024-07-15 11:44:38.523653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.291 [2024-07-15 11:44:38.523668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.291 [2024-07-15 11:44:38.531928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.291 [2024-07-15 11:44:38.531962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.291 [2024-07-15 11:44:38.531976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.291 [2024-07-15 11:44:38.540593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.291 [2024-07-15 11:44:38.540628] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.291 [2024-07-15 11:44:38.540642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.291 [2024-07-15 11:44:38.548935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.291 [2024-07-15 11:44:38.548974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.291 [2024-07-15 11:44:38.548989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.291 [2024-07-15 11:44:38.557519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.291 [2024-07-15 11:44:38.557554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.291 [2024-07-15 11:44:38.557569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.291 [2024-07-15 11:44:38.565893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.291 [2024-07-15 11:44:38.565928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.291 [2024-07-15 11:44:38.565943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.291 [2024-07-15 11:44:38.574027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.291 [2024-07-15 11:44:38.574061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.291 [2024-07-15 11:44:38.574076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.291 [2024-07-15 11:44:38.581943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.291 [2024-07-15 11:44:38.581976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.291 [2024-07-15 11:44:38.581991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.291 [2024-07-15 11:44:38.589984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.291 [2024-07-15 11:44:38.590018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.291 [2024-07-15 11:44:38.590032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.291 [2024-07-15 11:44:38.598589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.291 
[2024-07-15 11:44:38.598624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.291 [2024-07-15 11:44:38.598639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.291 [2024-07-15 11:44:38.607090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.291 [2024-07-15 11:44:38.607124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.291 [2024-07-15 11:44:38.607139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.291 [2024-07-15 11:44:38.615853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.291 [2024-07-15 11:44:38.615888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.291 [2024-07-15 11:44:38.615903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.291 [2024-07-15 11:44:38.624104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.291 [2024-07-15 11:44:38.624139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.291 [2024-07-15 11:44:38.624155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.291 [2024-07-15 11:44:38.632592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.291 [2024-07-15 11:44:38.632628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.291 [2024-07-15 11:44:38.632643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.291 [2024-07-15 11:44:38.641233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.291 [2024-07-15 11:44:38.641277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.291 [2024-07-15 11:44:38.641293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.291 [2024-07-15 11:44:38.649747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.291 [2024-07-15 11:44:38.649781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.292 [2024-07-15 11:44:38.649797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.292 [2024-07-15 11:44:38.658030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x9b7490) 00:29:04.292 [2024-07-15 11:44:38.658065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.292 [2024-07-15 11:44:38.658080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.292 [2024-07-15 11:44:38.666086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.292 [2024-07-15 11:44:38.666120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.292 [2024-07-15 11:44:38.666135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.292 [2024-07-15 11:44:38.674369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.292 [2024-07-15 11:44:38.674403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.292 [2024-07-15 11:44:38.674417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.292 [2024-07-15 11:44:38.682509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.292 [2024-07-15 11:44:38.682543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.292 [2024-07-15 11:44:38.682558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.292 [2024-07-15 11:44:38.690784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.292 [2024-07-15 11:44:38.690818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.292 [2024-07-15 11:44:38.690838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.292 [2024-07-15 11:44:38.699070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.292 [2024-07-15 11:44:38.699105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.292 [2024-07-15 11:44:38.699121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.292 [2024-07-15 11:44:38.707360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.292 [2024-07-15 11:44:38.707394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.292 [2024-07-15 11:44:38.707409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.292 [2024-07-15 11:44:38.715734] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.292 [2024-07-15 11:44:38.715769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.292 [2024-07-15 11:44:38.715784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.292 [2024-07-15 11:44:38.724499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.292 [2024-07-15 11:44:38.724534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.292 [2024-07-15 11:44:38.724550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.292 [2024-07-15 11:44:38.733078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.292 [2024-07-15 11:44:38.733113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.292 [2024-07-15 11:44:38.733128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.292 [2024-07-15 11:44:38.741382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.292 [2024-07-15 11:44:38.741417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.292 [2024-07-15 11:44:38.741432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.292 [2024-07-15 11:44:38.749833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.292 [2024-07-15 11:44:38.749867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.292 [2024-07-15 11:44:38.749882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.552 [2024-07-15 11:44:38.758719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.552 [2024-07-15 11:44:38.758756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.552 [2024-07-15 11:44:38.758771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.552 [2024-07-15 11:44:38.766983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.552 [2024-07-15 11:44:38.767023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.552 [2024-07-15 11:44:38.767038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:29:04.552 [2024-07-15 11:44:38.775065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.552 [2024-07-15 11:44:38.775099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.552 [2024-07-15 11:44:38.775114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.552 [2024-07-15 11:44:38.783123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.552 [2024-07-15 11:44:38.783157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.552 [2024-07-15 11:44:38.783171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.552 [2024-07-15 11:44:38.791381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.552 [2024-07-15 11:44:38.791414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.552 [2024-07-15 11:44:38.791429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.552 [2024-07-15 11:44:38.799421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.552 [2024-07-15 11:44:38.799455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.552 [2024-07-15 11:44:38.799470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.552 [2024-07-15 11:44:38.807807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.552 [2024-07-15 11:44:38.807840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.552 [2024-07-15 11:44:38.807855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.552 [2024-07-15 11:44:38.816017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.552 [2024-07-15 11:44:38.816056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.552 [2024-07-15 11:44:38.816072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.552 [2024-07-15 11:44:38.825087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.552 [2024-07-15 11:44:38.825123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.552 [2024-07-15 11:44:38.825139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.552 [2024-07-15 11:44:38.833646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.552 [2024-07-15 11:44:38.833682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.552 [2024-07-15 11:44:38.833697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.552 [2024-07-15 11:44:38.842615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.552 [2024-07-15 11:44:38.842651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.552 [2024-07-15 11:44:38.842667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.552 [2024-07-15 11:44:38.850687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.552 [2024-07-15 11:44:38.850722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.552 [2024-07-15 11:44:38.850737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.552 [2024-07-15 11:44:38.858902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.552 [2024-07-15 11:44:38.858936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.552 [2024-07-15 11:44:38.858952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.552 [2024-07-15 11:44:38.867221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.552 [2024-07-15 11:44:38.867263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.552 [2024-07-15 11:44:38.867278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.552 [2024-07-15 11:44:38.875463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.552 [2024-07-15 11:44:38.875496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.552 [2024-07-15 11:44:38.875511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.552 [2024-07-15 11:44:38.883811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.552 [2024-07-15 11:44:38.883844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.552 [2024-07-15 11:44:38.883859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.552 [2024-07-15 11:44:38.892174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.552 [2024-07-15 11:44:38.892207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.552 [2024-07-15 11:44:38.892221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.552 [2024-07-15 11:44:38.900573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.552 [2024-07-15 11:44:38.900609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.552 [2024-07-15 11:44:38.900624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.552 [2024-07-15 11:44:38.909352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.552 [2024-07-15 11:44:38.909387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.552 [2024-07-15 11:44:38.909408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.552 [2024-07-15 11:44:38.917977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.552 [2024-07-15 11:44:38.918013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.552 [2024-07-15 11:44:38.918028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.552 [2024-07-15 11:44:38.926164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.552 [2024-07-15 11:44:38.926199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.553 [2024-07-15 11:44:38.926214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.553 [2024-07-15 11:44:38.934404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.553 [2024-07-15 11:44:38.934437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.553 [2024-07-15 11:44:38.934452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.553 [2024-07-15 11:44:38.942697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.553 [2024-07-15 11:44:38.942731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.553 [2024-07-15 11:44:38.942746] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.553 [2024-07-15 11:44:38.951053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.553 [2024-07-15 11:44:38.951087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.553 [2024-07-15 11:44:38.951102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.553 [2024-07-15 11:44:38.959283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.553 [2024-07-15 11:44:38.959316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.553 [2024-07-15 11:44:38.959331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.553 [2024-07-15 11:44:38.967429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.553 [2024-07-15 11:44:38.967463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.553 [2024-07-15 11:44:38.967478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.553 [2024-07-15 11:44:38.975770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.553 [2024-07-15 11:44:38.975805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.553 [2024-07-15 11:44:38.975820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.553 [2024-07-15 11:44:38.984371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.553 [2024-07-15 11:44:38.984409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.553 [2024-07-15 11:44:38.984424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.553 [2024-07-15 11:44:38.992294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.553 [2024-07-15 11:44:38.992328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.553 [2024-07-15 11:44:38.992343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.553 [2024-07-15 11:44:39.000613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.553 [2024-07-15 11:44:39.000647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.553 
[2024-07-15 11:44:39.000662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.553 [2024-07-15 11:44:39.008774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.553 [2024-07-15 11:44:39.008809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.553 [2024-07-15 11:44:39.008824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.812 [2024-07-15 11:44:39.017016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.812 [2024-07-15 11:44:39.017051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.812 [2024-07-15 11:44:39.017066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.812 [2024-07-15 11:44:39.025198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.812 [2024-07-15 11:44:39.025232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.812 [2024-07-15 11:44:39.025247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.812 [2024-07-15 11:44:39.033731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.812 [2024-07-15 11:44:39.033765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.812 [2024-07-15 11:44:39.033781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.812 [2024-07-15 11:44:39.042108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.812 [2024-07-15 11:44:39.042140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.812 [2024-07-15 11:44:39.042156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.812 [2024-07-15 11:44:39.050372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.812 [2024-07-15 11:44:39.050403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.812 [2024-07-15 11:44:39.050419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.812 [2024-07-15 11:44:39.058762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.812 [2024-07-15 11:44:39.058794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12096 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.812 [2024-07-15 11:44:39.058809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.813 [2024-07-15 11:44:39.067905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.813 [2024-07-15 11:44:39.067940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.813 [2024-07-15 11:44:39.067955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.813 [2024-07-15 11:44:39.076156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.813 [2024-07-15 11:44:39.076191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.813 [2024-07-15 11:44:39.076206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.813 [2024-07-15 11:44:39.084196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.813 [2024-07-15 11:44:39.084231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.813 [2024-07-15 11:44:39.084246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.813 [2024-07-15 11:44:39.092518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.813 [2024-07-15 11:44:39.092552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.813 [2024-07-15 11:44:39.092567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.813 [2024-07-15 11:44:39.100847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.813 [2024-07-15 11:44:39.100882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.813 [2024-07-15 11:44:39.100897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.813 [2024-07-15 11:44:39.109048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.813 [2024-07-15 11:44:39.109082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.813 [2024-07-15 11:44:39.109097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.813 [2024-07-15 11:44:39.117086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.813 [2024-07-15 11:44:39.117120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.813 [2024-07-15 11:44:39.117135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.813 [2024-07-15 11:44:39.125366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.813 [2024-07-15 11:44:39.125404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.813 [2024-07-15 11:44:39.125420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.813 [2024-07-15 11:44:39.133916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.813 [2024-07-15 11:44:39.133950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.813 [2024-07-15 11:44:39.133966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.813 [2024-07-15 11:44:39.142345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.813 [2024-07-15 11:44:39.142379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.813 [2024-07-15 11:44:39.142394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.813 [2024-07-15 11:44:39.150562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.813 [2024-07-15 11:44:39.150597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.813 [2024-07-15 11:44:39.150613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.813 [2024-07-15 11:44:39.158887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.813 [2024-07-15 11:44:39.158922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.813 [2024-07-15 11:44:39.158937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.813 [2024-07-15 11:44:39.167348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.813 [2024-07-15 11:44:39.167382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.813 [2024-07-15 11:44:39.167397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.813 [2024-07-15 11:44:39.175535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.813 [2024-07-15 11:44:39.175570] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.813 [2024-07-15 11:44:39.175584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.813 [2024-07-15 11:44:39.183759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.813 [2024-07-15 11:44:39.183793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.813 [2024-07-15 11:44:39.183808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.813 [2024-07-15 11:44:39.191848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.813 [2024-07-15 11:44:39.191882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.813 [2024-07-15 11:44:39.191897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.813 [2024-07-15 11:44:39.200040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.813 [2024-07-15 11:44:39.200074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.813 [2024-07-15 11:44:39.200089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.813 [2024-07-15 11:44:39.208131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.813 [2024-07-15 11:44:39.208165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.813 [2024-07-15 11:44:39.208180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.813 [2024-07-15 11:44:39.216064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.813 [2024-07-15 11:44:39.216098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.813 [2024-07-15 11:44:39.216114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.813 [2024-07-15 11:44:39.224217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.813 [2024-07-15 11:44:39.224250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.813 [2024-07-15 11:44:39.224273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.813 [2024-07-15 11:44:39.232535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.813 
[2024-07-15 11:44:39.232571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.813 [2024-07-15 11:44:39.232587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.813 [2024-07-15 11:44:39.241011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.813 [2024-07-15 11:44:39.241045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.813 [2024-07-15 11:44:39.241060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.813 [2024-07-15 11:44:39.249353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.813 [2024-07-15 11:44:39.249387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.813 [2024-07-15 11:44:39.249402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.813 [2024-07-15 11:44:39.257593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.813 [2024-07-15 11:44:39.257629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.813 [2024-07-15 11:44:39.257643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.813 [2024-07-15 11:44:39.265926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.813 [2024-07-15 11:44:39.265961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.813 [2024-07-15 11:44:39.265980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.813 [2024-07-15 11:44:39.274075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:04.813 [2024-07-15 11:44:39.274108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.813 [2024-07-15 11:44:39.274123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.073 [2024-07-15 11:44:39.282055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:05.073 [2024-07-15 11:44:39.282090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.073 [2024-07-15 11:44:39.282105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.073 [2024-07-15 11:44:39.290656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x9b7490) 00:29:05.073 [2024-07-15 11:44:39.290689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.073 [2024-07-15 11:44:39.290704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.073 [2024-07-15 11:44:39.299029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:05.073 [2024-07-15 11:44:39.299062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.073 [2024-07-15 11:44:39.299076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.073 [2024-07-15 11:44:39.306683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:05.073 [2024-07-15 11:44:39.306716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.073 [2024-07-15 11:44:39.306731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.073 [2024-07-15 11:44:39.314741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:05.073 [2024-07-15 11:44:39.314776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.073 [2024-07-15 11:44:39.314791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.073 [2024-07-15 11:44:39.322851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:05.073 [2024-07-15 11:44:39.322886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.073 [2024-07-15 11:44:39.322902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.073 [2024-07-15 11:44:39.331000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:05.073 [2024-07-15 11:44:39.331034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.073 [2024-07-15 11:44:39.331049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.073 [2024-07-15 11:44:39.339104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:05.073 [2024-07-15 11:44:39.339144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.073 [2024-07-15 11:44:39.339159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.073 [2024-07-15 11:44:39.346869] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:05.073 [2024-07-15 11:44:39.346903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.073 [2024-07-15 11:44:39.346918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.073 [2024-07-15 11:44:39.355754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:05.073 [2024-07-15 11:44:39.355789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.073 [2024-07-15 11:44:39.355805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.073 [2024-07-15 11:44:39.364464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:05.073 [2024-07-15 11:44:39.364499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.073 [2024-07-15 11:44:39.364514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.073 [2024-07-15 11:44:39.373376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:05.073 [2024-07-15 11:44:39.373411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.073 [2024-07-15 11:44:39.373427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.073 [2024-07-15 11:44:39.382345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:05.073 [2024-07-15 11:44:39.382380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.073 [2024-07-15 11:44:39.382395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.073 [2024-07-15 11:44:39.388576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:05.073 [2024-07-15 11:44:39.388609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.073 [2024-07-15 11:44:39.388624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.073 [2024-07-15 11:44:39.397691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:05.073 [2024-07-15 11:44:39.397726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.073 [2024-07-15 11:44:39.397742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:29:05.073 [2024-07-15 11:44:39.406937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:05.073 [2024-07-15 11:44:39.406973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.073 [2024-07-15 11:44:39.406988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.073 [2024-07-15 11:44:39.415621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:05.073 [2024-07-15 11:44:39.415656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.073 [2024-07-15 11:44:39.415671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.073 [2024-07-15 11:44:39.423040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:05.073 [2024-07-15 11:44:39.423076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.073 [2024-07-15 11:44:39.423092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.073 [2024-07-15 11:44:39.431387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:05.073 [2024-07-15 11:44:39.431423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.073 [2024-07-15 11:44:39.431439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.073 [2024-07-15 11:44:39.439544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:05.073 [2024-07-15 11:44:39.439579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.073 [2024-07-15 11:44:39.439594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.073 [2024-07-15 11:44:39.447747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:05.073 [2024-07-15 11:44:39.447781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.073 [2024-07-15 11:44:39.447797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.073 [2024-07-15 11:44:39.455791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b7490) 00:29:05.073 [2024-07-15 11:44:39.455826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.073 [2024-07-15 11:44:39.455841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:05.073
00:29:05.073 Latency(us)
00:29:05.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:05.073 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:05.073 nvme0n1 : 2.00 3762.80 470.35 0.00 0.00 4246.64 1094.75 10902.81
00:29:05.073 ===================================================================================================================
00:29:05.073 Total : 3762.80 470.35 0.00 0.00 4246.64 1094.75 10902.81
00:29:05.073 0
00:29:05.073 11:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:05.073 11:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:05.073 11:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:05.073 | .driver_specific
00:29:05.073 | .nvme_error
00:29:05.073 | .status_code
00:29:05.073 | .command_transient_transport_error'
00:29:05.073 11:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:05.332 11:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 242 > 0 ))
00:29:05.332 11:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2958296
00:29:05.332 11:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2958296 ']'
00:29:05.332 11:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2958296
00:29:05.332 11:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:29:05.332 11:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:05.332 11:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2958296
00:29:05.332 11:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:05.332 11:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:05.332 11:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2958296'
killing process with pid 2958296
11:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2958296
Received shutdown signal, test time was about 2.000000 seconds
00:29:05.332
00:29:05.332 Latency(us)
00:29:05.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:05.332 ===================================================================================================================
00:29:05.332 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:05.332 11:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2958296
00:29:05.591 11:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:05.591 11:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:05.591 11:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:05.591 11:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:05.591 11:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error --
host/digest.sh@56 -- # qd=128 00:29:05.591 11:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2959091 00:29:05.591 11:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2959091 /var/tmp/bperf.sock 00:29:05.591 11:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:05.591 11:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2959091 ']' 00:29:05.591 11:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:05.591 11:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:05.591 11:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:05.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:05.591 11:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:05.591 11:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:05.848 [2024-07-15 11:44:40.057354] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:29:05.848 [2024-07-15 11:44:40.057417] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2959091 ] 00:29:05.848 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.848 [2024-07-15 11:44:40.140577] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.848 [2024-07-15 11:44:40.244061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:06.781 11:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:06.781 11:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:06.781 11:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:06.781 11:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:06.781 11:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:06.781 11:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.782 11:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:06.782 11:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.782 11:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:06.782 11:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:07.349 nvme0n1 00:29:07.349 11:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:07.349 11:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.349 11:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:07.349 11:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.349 11:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:07.349 11:44:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:07.349 Running I/O for 2 seconds... 00:29:07.349 [2024-07-15 11:44:41.678367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190ed920 00:29:07.349 [2024-07-15 11:44:41.679516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.349 [2024-07-15 11:44:41.679560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:07.349 [2024-07-15 11:44:41.692858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fa7d8 00:29:07.349 [2024-07-15 11:44:41.694631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.349 [2024-07-15 11:44:41.694666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:07.349 [2024-07-15 11:44:41.708680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190df988 00:29:07.349 [2024-07-15 11:44:41.710630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.349 [2024-07-15 11:44:41.710662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:07.349 [2024-07-15 11:44:41.720210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e38d0 00:29:07.349 [2024-07-15 11:44:41.721310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.349 [2024-07-15 11:44:41.721340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:07.349 [2024-07-15 11:44:41.735928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190f96f8 00:29:07.349 [2024-07-15 11:44:41.737720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.349 [2024-07-15 11:44:41.737753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:07.349 [2024-07-15 
11:44:41.749277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190ee5c8 00:29:07.349 [2024-07-15 11:44:41.750536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.349 [2024-07-15 11:44:41.750568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:07.349 [2024-07-15 11:44:41.764974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.349 [2024-07-15 11:44:41.766413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.349 [2024-07-15 11:44:41.766445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.349 [2024-07-15 11:44:41.778741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.349 [2024-07-15 11:44:41.780175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.349 [2024-07-15 11:44:41.780204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.349 [2024-07-15 11:44:41.792564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.349 [2024-07-15 11:44:41.793997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.349 [2024-07-15 11:44:41.794027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.349 [2024-07-15 11:44:41.806352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.349 [2024-07-15 11:44:41.807788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.349 [2024-07-15 11:44:41.807819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.607 [2024-07-15 11:44:41.820121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.607 [2024-07-15 11:44:41.821567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:54 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.607 [2024-07-15 11:44:41.821597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.607 [2024-07-15 11:44:41.833967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.607 [2024-07-15 11:44:41.835412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.607 [2024-07-15 11:44:41.835442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 
00:29:07.607 [2024-07-15 11:44:41.847746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.607 [2024-07-15 11:44:41.849175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.607 [2024-07-15 11:44:41.849210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.607 [2024-07-15 11:44:41.861561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.607 [2024-07-15 11:44:41.862996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.607 [2024-07-15 11:44:41.863026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.607 [2024-07-15 11:44:41.875387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.607 [2024-07-15 11:44:41.876821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.607 [2024-07-15 11:44:41.876852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.607 [2024-07-15 11:44:41.889127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.607 [2024-07-15 11:44:41.890566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.607 [2024-07-15 11:44:41.890597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.607 [2024-07-15 11:44:41.902921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.607 [2024-07-15 11:44:41.904357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.607 [2024-07-15 11:44:41.904386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.607 [2024-07-15 11:44:41.916700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.607 [2024-07-15 11:44:41.918165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.607 [2024-07-15 11:44:41.918196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.607 [2024-07-15 11:44:41.930501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.607 [2024-07-15 11:44:41.931938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.607 [2024-07-15 11:44:41.931967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 
sqhd:0074 p:0 m:0 dnr:0 00:29:07.607 [2024-07-15 11:44:41.944325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.607 [2024-07-15 11:44:41.945759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.607 [2024-07-15 11:44:41.945788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.607 [2024-07-15 11:44:41.958078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.607 [2024-07-15 11:44:41.959517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.607 [2024-07-15 11:44:41.959547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.607 [2024-07-15 11:44:41.971814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.607 [2024-07-15 11:44:41.973251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.607 [2024-07-15 11:44:41.973287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.607 [2024-07-15 11:44:41.985653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.607 [2024-07-15 11:44:41.987091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.607 [2024-07-15 11:44:41.987120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.607 [2024-07-15 11:44:41.999392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.607 [2024-07-15 11:44:42.000824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.607 [2024-07-15 11:44:42.000853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.607 [2024-07-15 11:44:42.013161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.607 [2024-07-15 11:44:42.014602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.608 [2024-07-15 11:44:42.014631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.608 [2024-07-15 11:44:42.026955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.608 [2024-07-15 11:44:42.028395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.608 [2024-07-15 11:44:42.028426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.608 [2024-07-15 11:44:42.040693] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.608 [2024-07-15 11:44:42.042124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.608 [2024-07-15 11:44:42.042153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.608 [2024-07-15 11:44:42.054500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.608 [2024-07-15 11:44:42.055933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.608 [2024-07-15 11:44:42.055963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.608 [2024-07-15 11:44:42.068290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.608 [2024-07-15 11:44:42.069722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.608 [2024-07-15 11:44:42.069752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.866 [2024-07-15 11:44:42.082029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.866 [2024-07-15 11:44:42.083479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.866 [2024-07-15 11:44:42.083509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.866 [2024-07-15 11:44:42.095859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.866 [2024-07-15 11:44:42.097288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.867 [2024-07-15 11:44:42.097318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.867 [2024-07-15 11:44:42.109594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.867 [2024-07-15 11:44:42.111029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.867 [2024-07-15 11:44:42.111058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.867 [2024-07-15 11:44:42.123351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.867 [2024-07-15 11:44:42.124785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.867 [2024-07-15 11:44:42.124814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.867 [2024-07-15 11:44:42.137209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.867 [2024-07-15 11:44:42.138650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.867 [2024-07-15 11:44:42.138679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.867 [2024-07-15 11:44:42.150962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.867 [2024-07-15 11:44:42.152400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.867 [2024-07-15 11:44:42.152429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.867 [2024-07-15 11:44:42.164740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.867 [2024-07-15 11:44:42.166174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.867 [2024-07-15 11:44:42.166205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.867 [2024-07-15 11:44:42.178541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.867 [2024-07-15 11:44:42.180009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.867 [2024-07-15 11:44:42.180039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.867 [2024-07-15 11:44:42.192278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.867 [2024-07-15 11:44:42.193715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.867 [2024-07-15 11:44:42.193744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.867 [2024-07-15 11:44:42.206089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.867 [2024-07-15 11:44:42.207524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.867 [2024-07-15 11:44:42.207557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.867 [2024-07-15 11:44:42.219863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.867 [2024-07-15 11:44:42.221296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.867 [2024-07-15 11:44:42.221326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.867 [2024-07-15 11:44:42.233615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.867 [2024-07-15 11:44:42.235046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.867 [2024-07-15 11:44:42.235076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.867 [2024-07-15 11:44:42.247428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.867 [2024-07-15 11:44:42.248860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.867 [2024-07-15 11:44:42.248889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.867 [2024-07-15 11:44:42.261167] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.867 [2024-07-15 11:44:42.262600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.867 [2024-07-15 11:44:42.262629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.867 [2024-07-15 11:44:42.274925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.867 [2024-07-15 11:44:42.276367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.867 [2024-07-15 11:44:42.276396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.867 [2024-07-15 11:44:42.288739] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.867 [2024-07-15 11:44:42.290175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.867 [2024-07-15 11:44:42.290205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.867 [2024-07-15 11:44:42.302481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.867 [2024-07-15 11:44:42.303912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.867 [2024-07-15 11:44:42.303941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.867 [2024-07-15 11:44:42.316271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:07.867 [2024-07-15 11:44:42.317702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.867 [2024-07-15 11:44:42.317730] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:07.867 [2024-07-15 11:44:42.330044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:08.126 [2024-07-15 11:44:42.331488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.126 [2024-07-15 11:44:42.331518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:08.126 [2024-07-15 11:44:42.343827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:08.126 [2024-07-15 11:44:42.345268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.126 [2024-07-15 11:44:42.345297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:08.126 [2024-07-15 11:44:42.357644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:08.126 [2024-07-15 11:44:42.359077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.126 [2024-07-15 11:44:42.359106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:08.126 [2024-07-15 11:44:42.371404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:08.126 [2024-07-15 11:44:42.372837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.126 [2024-07-15 11:44:42.372867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:08.126 [2024-07-15 11:44:42.385159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:08.126 [2024-07-15 11:44:42.386600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.126 [2024-07-15 11:44:42.386629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:08.126 [2024-07-15 11:44:42.398988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:08.126 [2024-07-15 11:44:42.400427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.126 [2024-07-15 11:44:42.400455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:08.126 [2024-07-15 11:44:42.412718] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:08.126 [2024-07-15 11:44:42.414152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.126 [2024-07-15 
11:44:42.414181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:08.126 [2024-07-15 11:44:42.426495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:08.126 [2024-07-15 11:44:42.427927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.126 [2024-07-15 11:44:42.427957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:08.126 [2024-07-15 11:44:42.440288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:08.126 [2024-07-15 11:44:42.441729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.126 [2024-07-15 11:44:42.441759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:08.126 [2024-07-15 11:44:42.454022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:08.126 [2024-07-15 11:44:42.455459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.126 [2024-07-15 11:44:42.455489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:08.127 [2024-07-15 11:44:42.467825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:08.127 [2024-07-15 11:44:42.469264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.127 [2024-07-15 11:44:42.469293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:08.127 [2024-07-15 11:44:42.481846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:08.127 [2024-07-15 11:44:42.483282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.127 [2024-07-15 11:44:42.483311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:08.127 [2024-07-15 11:44:42.495574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:08.127 [2024-07-15 11:44:42.497008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.127 [2024-07-15 11:44:42.497038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:08.127 [2024-07-15 11:44:42.509418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:08.127 [2024-07-15 11:44:42.510849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:08.127 [2024-07-15 11:44:42.510879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:08.127 [2024-07-15 11:44:42.523173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:08.127 [2024-07-15 11:44:42.524626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.127 [2024-07-15 11:44:42.524656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:08.127 [2024-07-15 11:44:42.536955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1710 00:29:08.127 [2024-07-15 11:44:42.538480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.127 [2024-07-15 11:44:42.538509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:08.127 [2024-07-15 11:44:42.549792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fc128 00:29:08.127 [2024-07-15 11:44:42.551558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.127 [2024-07-15 11:44:42.551588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:08.127 [2024-07-15 11:44:42.562923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190f0350 00:29:08.127 [2024-07-15 11:44:42.563903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.127 [2024-07-15 11:44:42.563939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:08.127 [2024-07-15 11:44:42.579145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fc998 00:29:08.127 [2024-07-15 11:44:42.581024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.127 [2024-07-15 11:44:42.581054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:08.386 [2024-07-15 11:44:42.592371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190f0350 00:29:08.386 [2024-07-15 11:44:42.593601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.386 [2024-07-15 11:44:42.593631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.386 [2024-07-15 11:44:42.606523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e7c50 00:29:08.386 [2024-07-15 11:44:42.608374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22993 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:08.386 [2024-07-15 11:44:42.608404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:08.386 [2024-07-15 11:44:42.620778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e6fa8 00:29:08.386 [2024-07-15 11:44:42.622280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.386 [2024-07-15 11:44:42.622309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:08.386 [2024-07-15 11:44:42.634961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190f3e60 00:29:08.386 [2024-07-15 11:44:42.637028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.386 [2024-07-15 11:44:42.637058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.386 [2024-07-15 11:44:42.649513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190f57b0 00:29:08.386 [2024-07-15 11:44:42.651172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.386 [2024-07-15 11:44:42.651201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:08.386 [2024-07-15 11:44:42.662608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190de038 00:29:08.386 [2024-07-15 11:44:42.664279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.386 [2024-07-15 11:44:42.664309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:08.386 [2024-07-15 11:44:42.675841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190f0bc0 00:29:08.386 [2024-07-15 11:44:42.676899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.386 [2024-07-15 11:44:42.676929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:08.386 [2024-07-15 11:44:42.691241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190f0bc0 00:29:08.386 [2024-07-15 11:44:42.693011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.386 [2024-07-15 11:44:42.693041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:08.386 [2024-07-15 11:44:42.704421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190f92c0 00:29:08.386 [2024-07-15 11:44:42.705588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:20466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.386 [2024-07-15 11:44:42.705619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:08.386 [2024-07-15 11:44:42.718464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190ddc00 00:29:08.386 [2024-07-15 11:44:42.720247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.386 [2024-07-15 11:44:42.720285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:08.386 [2024-07-15 11:44:42.734116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190f1ca0 00:29:08.386 [2024-07-15 11:44:42.736095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.386 [2024-07-15 11:44:42.736124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:08.386 [2024-07-15 11:44:42.745562] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190ed0b0 00:29:08.386 [2024-07-15 11:44:42.746765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.386 [2024-07-15 11:44:42.746793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:08.386 [2024-07-15 11:44:42.761453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190ea680 00:29:08.386 [2024-07-15 11:44:42.763113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.386 [2024-07-15 11:44:42.763145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:08.386 [2024-07-15 11:44:42.777476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e5a90 00:29:08.387 [2024-07-15 11:44:42.779602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.387 [2024-07-15 11:44:42.779632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:08.387 [2024-07-15 11:44:42.790711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190ea680 00:29:08.387 [2024-07-15 11:44:42.792176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.387 [2024-07-15 11:44:42.792206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:08.387 [2024-07-15 11:44:42.803322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fdeb0 00:29:08.387 [2024-07-15 11:44:42.805096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:110 nsid:1 lba:23972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.387 [2024-07-15 11:44:42.805126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:08.387 [2024-07-15 11:44:42.816519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190ebfd0 00:29:08.387 [2024-07-15 11:44:42.817511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.387 [2024-07-15 11:44:42.817540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:08.387 [2024-07-15 11:44:42.831214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190efae0 00:29:08.387 [2024-07-15 11:44:42.832401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.387 [2024-07-15 11:44:42.832431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:08.387 [2024-07-15 11:44:42.844834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fe720 00:29:08.387 [2024-07-15 11:44:42.846404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.387 [2024-07-15 11:44:42.846434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:08.646 [2024-07-15 11:44:42.860584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190f0350 00:29:08.646 [2024-07-15 11:44:42.862279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.646 [2024-07-15 11:44:42.862309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:08.646 [2024-07-15 11:44:42.873807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fe720 00:29:08.646 [2024-07-15 11:44:42.874840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.646 [2024-07-15 11:44:42.874871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:08.646 [2024-07-15 11:44:42.887878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190f4f40 00:29:08.646 [2024-07-15 11:44:42.889564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.646 [2024-07-15 11:44:42.889596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.646 [2024-07-15 11:44:42.902404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fd208 00:29:08.646 [2024-07-15 11:44:42.903692] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.646 [2024-07-15 11:44:42.903721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:08.646 [2024-07-15 11:44:42.916705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fbcf0 00:29:08.646 [2024-07-15 11:44:42.918561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.646 [2024-07-15 11:44:42.918591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:08.646 [2024-07-15 11:44:42.932792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190f31b8 00:29:08.646 [2024-07-15 11:44:42.934973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.646 [2024-07-15 11:44:42.935007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:08.646 [2024-07-15 11:44:42.945992] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fbcf0 00:29:08.646 [2024-07-15 11:44:42.947578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.646 [2024-07-15 11:44:42.947609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:08.646 [2024-07-15 11:44:42.958682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e8d30 00:29:08.646 [2024-07-15 11:44:42.960318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.646 [2024-07-15 11:44:42.960347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:08.646 [2024-07-15 11:44:42.972835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e0a68 00:29:08.646 [2024-07-15 11:44:42.974675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.646 [2024-07-15 11:44:42.974706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:08.646 [2024-07-15 11:44:42.985977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190f8e88 00:29:08.646 [2024-07-15 11:44:42.987057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.646 [2024-07-15 11:44:42.987087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:08.646 [2024-07-15 11:44:43.002208] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e12d8 00:29:08.646 [2024-07-15 
11:44:43.004192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.646 [2024-07-15 11:44:43.004222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:08.646 [2024-07-15 11:44:43.014823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190f8e88 00:29:08.646 [2024-07-15 11:44:43.016302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.646 [2024-07-15 11:44:43.016332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:08.647 [2024-07-15 11:44:43.029027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190dece0 00:29:08.647 [2024-07-15 11:44:43.030363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.647 [2024-07-15 11:44:43.030394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:08.647 [2024-07-15 11:44:43.043536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190de038 00:29:08.647 [2024-07-15 11:44:43.045127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.647 [2024-07-15 11:44:43.045157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:08.647 [2024-07-15 11:44:43.057277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190feb58 00:29:08.647 [2024-07-15 11:44:43.058878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.647 [2024-07-15 11:44:43.058907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:08.647 [2024-07-15 11:44:43.070005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e5ec8 00:29:08.647 [2024-07-15 11:44:43.071771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.647 [2024-07-15 11:44:43.071801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:08.647 [2024-07-15 11:44:43.083147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e8d30 00:29:08.647 [2024-07-15 11:44:43.084172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.647 [2024-07-15 11:44:43.084201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:08.647 [2024-07-15 11:44:43.097787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e12d8 
00:29:08.647 [2024-07-15 11:44:43.098954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.647 [2024-07-15 11:44:43.098984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:08.906 [2024-07-15 11:44:43.110670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190f9f68 00:29:08.906 [2024-07-15 11:44:43.111830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.906 [2024-07-15 11:44:43.111860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:08.906 [2024-07-15 11:44:43.126437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e95a0 00:29:08.906 [2024-07-15 11:44:43.127787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.906 [2024-07-15 11:44:43.127816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:08.906 [2024-07-15 11:44:43.141066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e0630 00:29:08.906 [2024-07-15 11:44:43.142603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.906 [2024-07-15 11:44:43.142633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:08.906 [2024-07-15 11:44:43.153998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190edd58 00:29:08.906 [2024-07-15 11:44:43.155507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.906 [2024-07-15 11:44:43.155536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:08.906 [2024-07-15 11:44:43.167186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190f8618 00:29:08.906 [2024-07-15 11:44:43.168136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.906 [2024-07-15 11:44:43.168166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:08.906 [2024-07-15 11:44:43.181307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e88f8 00:29:08.906 [2024-07-15 11:44:43.182825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.906 [2024-07-15 11:44:43.182855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:08.906 [2024-07-15 11:44:43.196998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with 
pdu=0x2000190f7da8 00:29:08.906 [2024-07-15 11:44:43.198633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.906 [2024-07-15 11:44:43.198662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:08.906 [2024-07-15 11:44:43.210143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e88f8 00:29:08.906 [2024-07-15 11:44:43.211178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.906 [2024-07-15 11:44:43.211208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:08.906 [2024-07-15 11:44:43.224297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190eee38 00:29:08.906 [2024-07-15 11:44:43.225932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.906 [2024-07-15 11:44:43.225962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.906 [2024-07-15 11:44:43.240286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:08.906 [2024-07-15 11:44:43.242295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.906 [2024-07-15 11:44:43.242324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.906 [2024-07-15 11:44:43.253461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190eee38 00:29:08.906 [2024-07-15 11:44:43.254845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.906 [2024-07-15 11:44:43.254874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:08.906 [2024-07-15 11:44:43.267632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190f6020 00:29:08.906 [2024-07-15 11:44:43.269622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.906 [2024-07-15 11:44:43.269652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.906 [2024-07-15 11:44:43.283567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190de038 00:29:08.906 [2024-07-15 11:44:43.285950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.907 [2024-07-15 11:44:43.285979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.907 [2024-07-15 11:44:43.293882] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x210acd0) with pdu=0x2000190fac10 00:29:08.907 [2024-07-15 11:44:43.294949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.907 [2024-07-15 11:44:43.294978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:08.907 [2024-07-15 11:44:43.306817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190f8e88 00:29:08.907 [2024-07-15 11:44:43.307854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.907 [2024-07-15 11:44:43.307883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:08.907 [2024-07-15 11:44:43.322559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fc998 00:29:08.907 [2024-07-15 11:44:43.323799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.907 [2024-07-15 11:44:43.323828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:08.907 [2024-07-15 11:44:43.337214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190f92c0 00:29:08.907 [2024-07-15 11:44:43.338630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.907 [2024-07-15 11:44:43.338659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:08.907 [2024-07-15 11:44:43.350147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e9e10 00:29:08.907 [2024-07-15 11:44:43.351539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.907 [2024-07-15 11:44:43.351568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:08.907 [2024-07-15 11:44:43.365857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e4578 00:29:08.907 [2024-07-15 11:44:43.367469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.907 [2024-07-15 11:44:43.367499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:09.165 [2024-07-15 11:44:43.381741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190f9f68 00:29:09.165 [2024-07-15 11:44:43.384073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.165 [2024-07-15 11:44:43.384102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:09.165 [2024-07-15 11:44:43.392058] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e99d8 00:29:09.165 [2024-07-15 11:44:43.393086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.165 [2024-07-15 11:44:43.393115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:09.165 [2024-07-15 11:44:43.404942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190f9f68 00:29:09.165 [2024-07-15 11:44:43.405925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.165 [2024-07-15 11:44:43.405954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:09.165 [2024-07-15 11:44:43.420744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e1b48 00:29:09.165 [2024-07-15 11:44:43.421974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.165 [2024-07-15 11:44:43.422008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:09.165 [2024-07-15 11:44:43.435484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fd208 00:29:09.165 [2024-07-15 11:44:43.436834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.165 [2024-07-15 11:44:43.436863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:09.165 [2024-07-15 11:44:43.448364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fb8b8 00:29:09.165 [2024-07-15 11:44:43.449644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.165 [2024-07-15 11:44:43.449673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:09.165 [2024-07-15 11:44:43.463904] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190df118 00:29:09.165 [2024-07-15 11:44:43.465840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.165 [2024-07-15 11:44:43.465870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:09.165 [2024-07-15 11:44:43.478588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190eff18 00:29:09.165 [2024-07-15 11:44:43.480093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.165 [2024-07-15 11:44:43.480123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:09.165 [2024-07-15 11:44:43.491435] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e9168 00:29:09.165 [2024-07-15 11:44:43.493127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.166 [2024-07-15 11:44:43.493157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:09.166 [2024-07-15 11:44:43.507539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:09.166 [2024-07-15 11:44:43.509203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.166 [2024-07-15 11:44:43.509234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:09.166 [2024-07-15 11:44:43.520430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e73e0 00:29:09.166 [2024-07-15 11:44:43.522075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.166 [2024-07-15 11:44:43.522104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.166 [2024-07-15 11:44:43.533628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e3498 00:29:09.166 [2024-07-15 11:44:43.534720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.166 [2024-07-15 11:44:43.534749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:09.166 [2024-07-15 11:44:43.547812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190ddc00 00:29:09.166 [2024-07-15 11:44:43.549456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.166 [2024-07-15 11:44:43.549485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.166 [2024-07-15 11:44:43.562311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190edd58 00:29:09.166 [2024-07-15 11:44:43.563595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.166 [2024-07-15 11:44:43.563625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:09.166 [2024-07-15 11:44:43.576604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190f7970 00:29:09.166 [2024-07-15 11:44:43.578502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.166 [2024-07-15 11:44:43.578531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:09.166 
[2024-07-15 11:44:43.591198] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190de8a8 00:29:09.166 [2024-07-15 11:44:43.592653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.166 [2024-07-15 11:44:43.592682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:09.166 [2024-07-15 11:44:43.604093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190f96f8 00:29:09.166 [2024-07-15 11:44:43.605524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.166 [2024-07-15 11:44:43.605553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:09.166 [2024-07-15 11:44:43.619735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190f6890 00:29:09.166 [2024-07-15 11:44:43.621715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.166 [2024-07-15 11:44:43.621744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:09.425 [2024-07-15 11:44:43.633027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:09.425 [2024-07-15 11:44:43.634629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.425 [2024-07-15 11:44:43.634658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:09.425 [2024-07-15 11:44:43.646246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e4140 00:29:09.425 [2024-07-15 11:44:43.647188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.425 [2024-07-15 11:44:43.647218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:09.425 [2024-07-15 11:44:43.659647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190e5658 00:29:09.425 [2024-07-15 11:44:43.660846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.425 [2024-07-15 11:44:43.660875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:09.425 00:29:09.425 Latency(us) 00:29:09.425 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.425 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:09.425 nvme0n1 : 2.00 18225.13 71.19 0.00 0.00 7013.83 3559.80 18707.55 00:29:09.425 =================================================================================================================== 00:29:09.425 Total : 18225.13 71.19 0.00 0.00 7013.83 3559.80 18707.55 00:29:09.425 
0 00:29:09.425 11:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:09.425 11:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:09.425 11:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:09.425 | .driver_specific 00:29:09.425 | .nvme_error 00:29:09.425 | .status_code 00:29:09.425 | .command_transient_transport_error' 00:29:09.425 11:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:09.703 11:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 )) 00:29:09.703 11:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2959091 00:29:09.703 11:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2959091 ']' 00:29:09.703 11:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2959091 00:29:09.703 11:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:09.703 11:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:09.703 11:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2959091 00:29:09.703 11:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:09.703 11:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:09.703 11:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2959091' 00:29:09.703 killing process with pid 2959091 00:29:09.703 11:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2959091 00:29:09.703 Received shutdown signal, test time was about 2.000000 seconds 00:29:09.703 00:29:09.703 Latency(us) 00:29:09.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.703 =================================================================================================================== 00:29:09.703 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:09.703 11:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2959091 00:29:09.983 11:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:09.983 11:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:09.983 11:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:09.983 11:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:09.983 11:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:09.983 11:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2959741 00:29:09.983 11:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2959741 /var/tmp/bperf.sock 00:29:09.983 11:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:09.983 11:44:44 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2959741 ']' 00:29:09.983 11:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:09.983 11:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:09.983 11:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:09.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:09.983 11:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:09.983 11:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:09.983 [2024-07-15 11:44:44.273612] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:29:09.983 [2024-07-15 11:44:44.273674] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2959741 ] 00:29:09.983 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:09.983 Zero copy mechanism will not be used. 00:29:09.983 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.983 [2024-07-15 11:44:44.354720] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.255 [2024-07-15 11:44:44.458720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.821 11:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:10.822 11:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:10.822 11:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:10.822 11:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:11.080 11:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:11.080 11:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.080 11:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:11.080 11:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.080 11:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:11.080 11:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:11.337 nvme0n1 00:29:11.337 11:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:11.337 11:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 
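The xtrace above shows how the second error run (randwrite, 128 KiB I/Os, queue depth 16) is wired up: digest.sh launches a dedicated bdevperf instance on /var/tmp/bperf.sock and then configures it over that socket. Condensed into plain commands, the sequence looks roughly like the sketch below; it is reconstructed from the trace rather than taken from the script itself, it substitutes a simple socket poll for waitforlisten, and it assumes the SPDK tree sits at the workspace path shown in the log and that the target-side rpc_cmd calls go to the nvmf target's default RPC socket (not shown in the trace).

  #!/usr/bin/env bash
  # Sketch of the bdevperf setup traced above (reconstructed, not the actual digest.sh).
  set -euo pipefail

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # Start bdevperf in "wait for RPC" mode (-z): core mask 0x2, randwrite workload,
  # 128 KiB I/Os, 2 second run, queue depth 16, RPC listener on $BPERF_SOCK.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
      -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!

  # Wait until the RPC socket is up (digest.sh uses waitforlisten for this step).
  while [ ! -S "$BPERF_SOCK" ]; do sleep 0.1; done

  # Keep per-NVMe-command error statistics and retry failed commands indefinitely,
  # so digest errors surface as transient transport errors instead of I/O failures.
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1

  # Target side (rpc_cmd in the trace; socket assumed to be the nvmf target's default):
  # make sure CRC32C error injection starts out disabled before attaching.
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable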
00:29:11.337 11:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:11.337 11:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.337 11:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:11.338 11:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:11.596 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:11.596 Zero copy mechanism will not be used. 00:29:11.596 Running I/O for 2 seconds... 00:29:11.596 [2024-07-15 11:44:45.852855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.596 [2024-07-15 11:44:45.853371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.596 [2024-07-15 11:44:45.853420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.596 [2024-07-15 11:44:45.859979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.596 [2024-07-15 11:44:45.860517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.596 [2024-07-15 11:44:45.860554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.596 [2024-07-15 11:44:45.868004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.596 [2024-07-15 11:44:45.868539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.596 [2024-07-15 11:44:45.868573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.596 [2024-07-15 11:44:45.876012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.596 [2024-07-15 11:44:45.876490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.596 [2024-07-15 11:44:45.876523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.596 [2024-07-15 11:44:45.884416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.596 [2024-07-15 11:44:45.884932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.596 [2024-07-15 11:44:45.884965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.596 [2024-07-15 11:44:45.892486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.596 [2024-07-15 11:44:45.892990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.596 [2024-07-15 11:44:45.893022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.596 [2024-07-15 11:44:45.901353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.596 [2024-07-15 11:44:45.901852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.596 [2024-07-15 11:44:45.901884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.596 [2024-07-15 11:44:45.909490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.596 [2024-07-15 11:44:45.909954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.596 [2024-07-15 11:44:45.909986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.596 [2024-07-15 11:44:45.917738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.596 [2024-07-15 11:44:45.918285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.596 [2024-07-15 11:44:45.918317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.596 [2024-07-15 11:44:45.925939] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.596 [2024-07-15 11:44:45.926485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.596 [2024-07-15 11:44:45.926517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.596 [2024-07-15 11:44:45.934029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.596 [2024-07-15 11:44:45.934543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.596 [2024-07-15 11:44:45.934574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.596 [2024-07-15 11:44:45.942580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.596 [2024-07-15 11:44:45.943114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.596 [2024-07-15 11:44:45.943146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.596 [2024-07-15 11:44:45.950599] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.596 [2024-07-15 11:44:45.951086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.597 [2024-07-15 11:44:45.951118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.597 [2024-07-15 11:44:45.958712] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.597 [2024-07-15 11:44:45.959221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.597 [2024-07-15 11:44:45.959252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.597 [2024-07-15 11:44:45.967164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.597 [2024-07-15 11:44:45.967680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.597 [2024-07-15 11:44:45.967711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.597 [2024-07-15 11:44:45.975373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.597 [2024-07-15 11:44:45.975882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.597 [2024-07-15 11:44:45.975912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.597 [2024-07-15 11:44:45.983547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.597 [2024-07-15 11:44:45.984038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.597 [2024-07-15 11:44:45.984070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.597 [2024-07-15 11:44:45.992060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.597 [2024-07-15 11:44:45.992576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.597 [2024-07-15 11:44:45.992608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.597 [2024-07-15 11:44:46.000275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.597 [2024-07-15 11:44:46.000787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.597 [2024-07-15 11:44:46.000818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.597 [2024-07-15 11:44:46.007387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.597 
[2024-07-15 11:44:46.007893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.597 [2024-07-15 11:44:46.007924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.597 [2024-07-15 11:44:46.013594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.597 [2024-07-15 11:44:46.014100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.597 [2024-07-15 11:44:46.014131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.597 [2024-07-15 11:44:46.019825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.597 [2024-07-15 11:44:46.020330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.597 [2024-07-15 11:44:46.020360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.597 [2024-07-15 11:44:46.025990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.597 [2024-07-15 11:44:46.026511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.597 [2024-07-15 11:44:46.026543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.597 [2024-07-15 11:44:46.032238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.597 [2024-07-15 11:44:46.032767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.597 [2024-07-15 11:44:46.032798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.597 [2024-07-15 11:44:46.038484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.597 [2024-07-15 11:44:46.038989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.597 [2024-07-15 11:44:46.039019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.597 [2024-07-15 11:44:46.044830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.597 [2024-07-15 11:44:46.045335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.597 [2024-07-15 11:44:46.045365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.597 [2024-07-15 11:44:46.050996] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.597 [2024-07-15 11:44:46.051521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.597 [2024-07-15 11:44:46.051557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.597 [2024-07-15 11:44:46.057265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.597 [2024-07-15 11:44:46.057782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.597 [2024-07-15 11:44:46.057813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.856 [2024-07-15 11:44:46.063527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.856 [2024-07-15 11:44:46.064036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.856 [2024-07-15 11:44:46.064066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.856 [2024-07-15 11:44:46.069761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.856 [2024-07-15 11:44:46.070276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.856 [2024-07-15 11:44:46.070307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.856 [2024-07-15 11:44:46.076033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.856 [2024-07-15 11:44:46.076546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.856 [2024-07-15 11:44:46.076577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.856 [2024-07-15 11:44:46.082322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.856 [2024-07-15 11:44:46.082835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.856 [2024-07-15 11:44:46.082866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.856 [2024-07-15 11:44:46.088547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.856 [2024-07-15 11:44:46.089049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.856 [2024-07-15 11:44:46.089079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.856 [2024-07-15 11:44:46.094746] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.856 [2024-07-15 11:44:46.095238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.856 [2024-07-15 11:44:46.095276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.856 [2024-07-15 11:44:46.100966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.856 [2024-07-15 11:44:46.101483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.856 [2024-07-15 11:44:46.101514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.856 [2024-07-15 11:44:46.107164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.856 [2024-07-15 11:44:46.107684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.856 [2024-07-15 11:44:46.107715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.856 [2024-07-15 11:44:46.113418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.856 [2024-07-15 11:44:46.113920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.856 [2024-07-15 11:44:46.113951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.856 [2024-07-15 11:44:46.119626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.856 [2024-07-15 11:44:46.120120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.856 [2024-07-15 11:44:46.120151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.856 [2024-07-15 11:44:46.125816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.856 [2024-07-15 11:44:46.126338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.856 [2024-07-15 11:44:46.126369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.856 [2024-07-15 11:44:46.132249] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.856 [2024-07-15 11:44:46.132774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.856 [2024-07-15 11:44:46.132804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
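The repeated "data_crc32_calc_done: Data digest error" entries above come from NVMe/TCP data digest (DDGST) checking: the receiver recomputes a CRC32C over each DATA PDU payload and compares it with the 4-byte digest carried on the wire, failing the command when they differ. A minimal sketch of that check, in plain C with local helper names (not SPDK's tcp.c code) and a software CRC32C, could look like:

/* Illustrative sketch only: verify an NVMe/TCP data digest (CRC32C over the
 * DATA PDU payload). Names are local to this example, not SPDK's. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Bitwise, reflected CRC32C (Castagnoli polynomial 0x1EDC6F41). */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return ~crc;
}

/* Compare the recomputed digest with the one received after the payload.
 * Byte order of the received digest is assumed little-endian here. */
static bool data_digest_ok(const uint8_t *payload, size_t len,
                           const uint8_t recv_ddgst[4])
{
    uint32_t calc = crc32c(payload, len);
    uint32_t recv = (uint32_t)recv_ddgst[0]       |
                    (uint32_t)recv_ddgst[1] << 8  |
                    (uint32_t)recv_ddgst[2] << 16 |
                    (uint32_t)recv_ddgst[3] << 24;
    return calc == recv;
}

When data_digest_ok() would return false, the connection logs a digest error for that PDU and the affected WRITE completes with the transient transport error shown in the surrounding entries.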
00:29:11.856 [2024-07-15 11:44:46.139451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.856 [2024-07-15 11:44:46.139986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.856 [2024-07-15 11:44:46.140016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.856 [2024-07-15 11:44:46.146579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.857 [2024-07-15 11:44:46.147076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.857 [2024-07-15 11:44:46.147107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.857 [2024-07-15 11:44:46.153313] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.857 [2024-07-15 11:44:46.153804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.857 [2024-07-15 11:44:46.153835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.857 [2024-07-15 11:44:46.159767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.857 [2024-07-15 11:44:46.160280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.857 [2024-07-15 11:44:46.160310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.857 [2024-07-15 11:44:46.166061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.857 [2024-07-15 11:44:46.166575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.857 [2024-07-15 11:44:46.166606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.857 [2024-07-15 11:44:46.172468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.857 [2024-07-15 11:44:46.172970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.857 [2024-07-15 11:44:46.172999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.857 [2024-07-15 11:44:46.178732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.857 [2024-07-15 11:44:46.179248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.857 [2024-07-15 11:44:46.179297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.857 [2024-07-15 11:44:46.184926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.857 [2024-07-15 11:44:46.185442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.857 [2024-07-15 11:44:46.185473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.857 [2024-07-15 11:44:46.191150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.857 [2024-07-15 11:44:46.191675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.857 [2024-07-15 11:44:46.191706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.857 [2024-07-15 11:44:46.197307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.857 [2024-07-15 11:44:46.197812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.857 [2024-07-15 11:44:46.197843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.857 [2024-07-15 11:44:46.203482] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.857 [2024-07-15 11:44:46.203988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.857 [2024-07-15 11:44:46.204018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.857 [2024-07-15 11:44:46.209763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.857 [2024-07-15 11:44:46.210274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.857 [2024-07-15 11:44:46.210305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.857 [2024-07-15 11:44:46.216267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.857 [2024-07-15 11:44:46.216764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.857 [2024-07-15 11:44:46.216800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.857 [2024-07-15 11:44:46.222978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.857 [2024-07-15 11:44:46.223481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.857 [2024-07-15 11:44:46.223512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.857 [2024-07-15 11:44:46.231296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.857 [2024-07-15 11:44:46.231807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.857 [2024-07-15 11:44:46.231837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.857 [2024-07-15 11:44:46.239133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.857 [2024-07-15 11:44:46.239645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.857 [2024-07-15 11:44:46.239677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.857 [2024-07-15 11:44:46.247168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.857 [2024-07-15 11:44:46.247685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.857 [2024-07-15 11:44:46.247716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.857 [2024-07-15 11:44:46.255245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.857 [2024-07-15 11:44:46.255768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.857 [2024-07-15 11:44:46.255798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.857 [2024-07-15 11:44:46.263327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.857 [2024-07-15 11:44:46.263808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.857 [2024-07-15 11:44:46.263839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.857 [2024-07-15 11:44:46.271301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.857 [2024-07-15 11:44:46.271801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.857 [2024-07-15 11:44:46.271831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.857 [2024-07-15 11:44:46.279218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.857 [2024-07-15 11:44:46.279729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.857 [2024-07-15 11:44:46.279761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.857 [2024-07-15 11:44:46.287053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.857 [2024-07-15 11:44:46.287251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.857 [2024-07-15 11:44:46.287290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.857 [2024-07-15 11:44:46.295394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.857 [2024-07-15 11:44:46.295904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.857 [2024-07-15 11:44:46.295935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.857 [2024-07-15 11:44:46.303735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.857 [2024-07-15 11:44:46.304234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.857 [2024-07-15 11:44:46.304271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.857 [2024-07-15 11:44:46.312057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:11.857 [2024-07-15 11:44:46.312556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.857 [2024-07-15 11:44:46.312587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.116 [2024-07-15 11:44:46.319997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.116 [2024-07-15 11:44:46.320519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.116 [2024-07-15 11:44:46.320561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.116 [2024-07-15 11:44:46.327779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.116 [2024-07-15 11:44:46.328295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.116 [2024-07-15 11:44:46.328326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.116 [2024-07-15 11:44:46.334300] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.116 [2024-07-15 11:44:46.334808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.116 
[2024-07-15 11:44:46.334838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.116 [2024-07-15 11:44:46.340979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.116 [2024-07-15 11:44:46.341479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.116 [2024-07-15 11:44:46.341510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.116 [2024-07-15 11:44:46.347091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.116 [2024-07-15 11:44:46.347587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.116 [2024-07-15 11:44:46.347624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.116 [2024-07-15 11:44:46.353359] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.116 [2024-07-15 11:44:46.353875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.116 [2024-07-15 11:44:46.353906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.116 [2024-07-15 11:44:46.359591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.116 [2024-07-15 11:44:46.360107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.116 [2024-07-15 11:44:46.360137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.116 [2024-07-15 11:44:46.365826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.116 [2024-07-15 11:44:46.366337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.116 [2024-07-15 11:44:46.366369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.116 [2024-07-15 11:44:46.372075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.116 [2024-07-15 11:44:46.372584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.116 [2024-07-15 11:44:46.372614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.116 [2024-07-15 11:44:46.378298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.116 [2024-07-15 11:44:46.378799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.116 [2024-07-15 11:44:46.378830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.116 [2024-07-15 11:44:46.384470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.116 [2024-07-15 11:44:46.384990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.116 [2024-07-15 11:44:46.385021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.116 [2024-07-15 11:44:46.390712] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.116 [2024-07-15 11:44:46.391222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.391252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.396899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 [2024-07-15 11:44:46.397414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.397445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.403116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 [2024-07-15 11:44:46.403626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.403658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.409358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 [2024-07-15 11:44:46.409854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.409884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.415542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 [2024-07-15 11:44:46.416058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.416089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.421761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 [2024-07-15 11:44:46.422280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.422310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.427988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 [2024-07-15 11:44:46.428506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.428536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.434152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 [2024-07-15 11:44:46.434666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.434697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.440349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 [2024-07-15 11:44:46.440850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.440881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.446544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 [2024-07-15 11:44:46.447041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.447073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.452774] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 [2024-07-15 11:44:46.453299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.453331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.458970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 [2024-07-15 11:44:46.459487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.459518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.465163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 [2024-07-15 11:44:46.465689] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.465719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.471404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 [2024-07-15 11:44:46.471912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.471942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.477868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 [2024-07-15 11:44:46.478381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.478413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.484070] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 [2024-07-15 11:44:46.484593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.484623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.490302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 [2024-07-15 11:44:46.490816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.490846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.496570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 [2024-07-15 11:44:46.497087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.497118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.502789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 [2024-07-15 11:44:46.503309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.503340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.508962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 
[2024-07-15 11:44:46.509473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.509509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.515114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 [2024-07-15 11:44:46.515614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.515645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.521296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 [2024-07-15 11:44:46.521817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.521847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.527561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 [2024-07-15 11:44:46.528077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.528107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.533801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 [2024-07-15 11:44:46.534318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.534348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.540067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 [2024-07-15 11:44:46.540585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.540615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.546246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 [2024-07-15 11:44:46.546754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.546784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.552460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 [2024-07-15 11:44:46.552978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.553008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.117 [2024-07-15 11:44:46.558660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.117 [2024-07-15 11:44:46.559179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.117 [2024-07-15 11:44:46.559209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.118 [2024-07-15 11:44:46.564853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.118 [2024-07-15 11:44:46.565373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.118 [2024-07-15 11:44:46.565404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.118 [2024-07-15 11:44:46.571071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.118 [2024-07-15 11:44:46.571593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.118 [2024-07-15 11:44:46.571623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.118 [2024-07-15 11:44:46.577301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.118 [2024-07-15 11:44:46.577811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.118 [2024-07-15 11:44:46.577841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.378 [2024-07-15 11:44:46.583532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.378 [2024-07-15 11:44:46.584034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.378 [2024-07-15 11:44:46.584066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.378 [2024-07-15 11:44:46.589761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.378 [2024-07-15 11:44:46.590286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.378 [2024-07-15 11:44:46.590318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.378 [2024-07-15 11:44:46.595995] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.378 [2024-07-15 11:44:46.596518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.378 [2024-07-15 11:44:46.596548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.378 [2024-07-15 11:44:46.602196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.378 [2024-07-15 11:44:46.602709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.378 [2024-07-15 11:44:46.602739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.378 [2024-07-15 11:44:46.608403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.378 [2024-07-15 11:44:46.608909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.378 [2024-07-15 11:44:46.608938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.378 [2024-07-15 11:44:46.614644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.378 [2024-07-15 11:44:46.615146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.378 [2024-07-15 11:44:46.615177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.378 [2024-07-15 11:44:46.620859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.378 [2024-07-15 11:44:46.621386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.378 [2024-07-15 11:44:46.621417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.378 [2024-07-15 11:44:46.627091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.378 [2024-07-15 11:44:46.627611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.378 [2024-07-15 11:44:46.627642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.378 [2024-07-15 11:44:46.633285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.378 [2024-07-15 11:44:46.633793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.378 [2024-07-15 11:44:46.633822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
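Each completion notice above ends with the same status breakdown: (00/22) is status code type 0 (generic) with status code 0x22 (transient transport error), followed by the submission queue head (sqhd), phase (p), more (m) and do-not-retry (dnr) bits. A short, self-contained sketch of how those fields unpack from the 16-bit completion status word; the field names are local to this example and the layout follows the NVMe base specification:

/* Illustrative only: unpack the NVMe completion status fields that the
 * completion prints above report (p, sc, sct, m, dnr). */
#include <stdint.h>
#include <stdio.h>

struct status_fields {
    unsigned p, sc, sct, m, dnr;
};

static struct status_fields decode_status(uint16_t status)
{
    struct status_fields f;
    f.p   = status & 0x1;           /* phase tag             */
    f.sc  = (status >> 1) & 0xff;   /* status code           */
    f.sct = (status >> 9) & 0x7;    /* status code type      */
    f.m   = (status >> 14) & 0x1;   /* more status available */
    f.dnr = (status >> 15) & 0x1;   /* do not retry          */
    return f;
}

int main(void)
{
    /* sct=0, sc=0x22 reproduces the "(00/22)" printed in the log. */
    uint16_t status = (uint16_t)((0x0u << 9) | (0x22u << 1));
    struct status_fields f = decode_status(status);
    printf("sct=%02x sc=%02x m=%u dnr=%u\n", f.sct, f.sc, f.m, f.dnr);
    return 0;
}

Because dnr is 0 in every entry, each of these completions is marked as retryable rather than a hard failure.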
00:29:12.378 [2024-07-15 11:44:46.639483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.378 [2024-07-15 11:44:46.639995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.378 [2024-07-15 11:44:46.640026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.378 [2024-07-15 11:44:46.645644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.378 [2024-07-15 11:44:46.646151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.378 [2024-07-15 11:44:46.646182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.378 [2024-07-15 11:44:46.651883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.378 [2024-07-15 11:44:46.652387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.378 [2024-07-15 11:44:46.652417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.378 [2024-07-15 11:44:46.658047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.378 [2024-07-15 11:44:46.658577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.378 [2024-07-15 11:44:46.658608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.378 [2024-07-15 11:44:46.664279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.378 [2024-07-15 11:44:46.664796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.378 [2024-07-15 11:44:46.664826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.378 [2024-07-15 11:44:46.670506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.378 [2024-07-15 11:44:46.671023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.378 [2024-07-15 11:44:46.671058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.378 [2024-07-15 11:44:46.676723] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.378 [2024-07-15 11:44:46.677231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.378 [2024-07-15 11:44:46.677268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.378 [2024-07-15 11:44:46.682874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.378 [2024-07-15 11:44:46.683384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.378 [2024-07-15 11:44:46.683414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.378 [2024-07-15 11:44:46.689097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.378 [2024-07-15 11:44:46.689619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.378 [2024-07-15 11:44:46.689649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.378 [2024-07-15 11:44:46.695282] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.379 [2024-07-15 11:44:46.695803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.379 [2024-07-15 11:44:46.695833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.379 [2024-07-15 11:44:46.701497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.379 [2024-07-15 11:44:46.702016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.379 [2024-07-15 11:44:46.702045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.379 [2024-07-15 11:44:46.707736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.379 [2024-07-15 11:44:46.708253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.379 [2024-07-15 11:44:46.708291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.379 [2024-07-15 11:44:46.713963] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.379 [2024-07-15 11:44:46.714482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.379 [2024-07-15 11:44:46.714512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.379 [2024-07-15 11:44:46.720179] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.379 [2024-07-15 11:44:46.720736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.379 [2024-07-15 11:44:46.720767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.379 [2024-07-15 11:44:46.726431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.379 [2024-07-15 11:44:46.726933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.379 [2024-07-15 11:44:46.726964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.379 [2024-07-15 11:44:46.732618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.379 [2024-07-15 11:44:46.733122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.379 [2024-07-15 11:44:46.733152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.379 [2024-07-15 11:44:46.738817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.379 [2024-07-15 11:44:46.739335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.379 [2024-07-15 11:44:46.739366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.379 [2024-07-15 11:44:46.745004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.379 [2024-07-15 11:44:46.745521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.379 [2024-07-15 11:44:46.745552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.379 [2024-07-15 11:44:46.751225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.379 [2024-07-15 11:44:46.751740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.379 [2024-07-15 11:44:46.751771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.379 [2024-07-15 11:44:46.757435] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.379 [2024-07-15 11:44:46.757941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.379 [2024-07-15 11:44:46.757972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.379 [2024-07-15 11:44:46.763680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.379 [2024-07-15 11:44:46.764188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.379 [2024-07-15 11:44:46.764219] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.379 [2024-07-15 11:44:46.769869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.379 [2024-07-15 11:44:46.770372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.379 [2024-07-15 11:44:46.770402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.379 [2024-07-15 11:44:46.776076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.379 [2024-07-15 11:44:46.776595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.379 [2024-07-15 11:44:46.776630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.379 [2024-07-15 11:44:46.782291] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.379 [2024-07-15 11:44:46.782814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.379 [2024-07-15 11:44:46.782845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.379 [2024-07-15 11:44:46.788511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.379 [2024-07-15 11:44:46.789026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.379 [2024-07-15 11:44:46.789056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.379 [2024-07-15 11:44:46.794688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.379 [2024-07-15 11:44:46.795200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.379 [2024-07-15 11:44:46.795231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.379 [2024-07-15 11:44:46.800900] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.379 [2024-07-15 11:44:46.801410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.379 [2024-07-15 11:44:46.801441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.379 [2024-07-15 11:44:46.807059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.379 [2024-07-15 11:44:46.807563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.379 
[2024-07-15 11:44:46.807593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.379 [2024-07-15 11:44:46.813285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.379 [2024-07-15 11:44:46.813800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.379 [2024-07-15 11:44:46.813830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.379 [2024-07-15 11:44:46.819504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.379 [2024-07-15 11:44:46.820023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.379 [2024-07-15 11:44:46.820053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.379 [2024-07-15 11:44:46.825741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.379 [2024-07-15 11:44:46.826247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.379 [2024-07-15 11:44:46.826284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.379 [2024-07-15 11:44:46.831944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.379 [2024-07-15 11:44:46.832466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.379 [2024-07-15 11:44:46.832497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.379 [2024-07-15 11:44:46.838186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.379 [2024-07-15 11:44:46.838695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.379 [2024-07-15 11:44:46.838726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.639 [2024-07-15 11:44:46.844361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.639 [2024-07-15 11:44:46.844872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.639 [2024-07-15 11:44:46.844903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.639 [2024-07-15 11:44:46.850604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.639 [2024-07-15 11:44:46.851117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.639 [2024-07-15 11:44:46.851148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.639 [2024-07-15 11:44:46.856831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.639 [2024-07-15 11:44:46.857358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.639 [2024-07-15 11:44:46.857389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.639 [2024-07-15 11:44:46.863412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.639 [2024-07-15 11:44:46.863927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.639 [2024-07-15 11:44:46.863957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.639 [2024-07-15 11:44:46.871294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.639 [2024-07-15 11:44:46.871809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.639 [2024-07-15 11:44:46.871839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.639 [2024-07-15 11:44:46.878298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.639 [2024-07-15 11:44:46.878796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.639 [2024-07-15 11:44:46.878827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.639 [2024-07-15 11:44:46.884970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.639 [2024-07-15 11:44:46.885488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.639 [2024-07-15 11:44:46.885519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.639 [2024-07-15 11:44:46.891726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.639 [2024-07-15 11:44:46.892237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.639 [2024-07-15 11:44:46.892274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.639 [2024-07-15 11:44:46.899576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.639 [2024-07-15 11:44:46.900082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.639 [2024-07-15 11:44:46.900113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.639 [2024-07-15 11:44:46.907662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.639 [2024-07-15 11:44:46.908159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.639 [2024-07-15 11:44:46.908189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.639 [2024-07-15 11:44:46.916082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.639 [2024-07-15 11:44:46.916591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.639 [2024-07-15 11:44:46.916622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.639 [2024-07-15 11:44:46.924409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.639 [2024-07-15 11:44:46.924911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.639 [2024-07-15 11:44:46.924942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.639 [2024-07-15 11:44:46.933126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.639 [2024-07-15 11:44:46.933668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.639 [2024-07-15 11:44:46.933699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.639 [2024-07-15 11:44:46.942616] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.639 [2024-07-15 11:44:46.943136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.639 [2024-07-15 11:44:46.943166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.639 [2024-07-15 11:44:46.951022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.639 [2024-07-15 11:44:46.951559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.639 [2024-07-15 11:44:46.951591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.639 [2024-07-15 11:44:46.959309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.639 [2024-07-15 11:44:46.959821] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.639 [2024-07-15 11:44:46.959856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.639 [2024-07-15 11:44:46.967609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.639 [2024-07-15 11:44:46.968111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.639 [2024-07-15 11:44:46.968142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.639 [2024-07-15 11:44:46.975349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.639 [2024-07-15 11:44:46.975857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.639 [2024-07-15 11:44:46.975887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.639 [2024-07-15 11:44:46.983132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.639 [2024-07-15 11:44:46.983647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.639 [2024-07-15 11:44:46.983679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.639 [2024-07-15 11:44:46.990998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.639 [2024-07-15 11:44:46.991510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.639 [2024-07-15 11:44:46.991541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.639 [2024-07-15 11:44:46.999196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.639 [2024-07-15 11:44:46.999696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.639 [2024-07-15 11:44:46.999727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.639 [2024-07-15 11:44:47.006753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.639 [2024-07-15 11:44:47.007244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.639 [2024-07-15 11:44:47.007282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.639 [2024-07-15 11:44:47.014708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.639 
[2024-07-15 11:44:47.015220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.639 [2024-07-15 11:44:47.015251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.639 [2024-07-15 11:44:47.022293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.639 [2024-07-15 11:44:47.022795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.639 [2024-07-15 11:44:47.022824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.639 [2024-07-15 11:44:47.030007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.639 [2024-07-15 11:44:47.030133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.639 [2024-07-15 11:44:47.030161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.640 [2024-07-15 11:44:47.037866] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.640 [2024-07-15 11:44:47.038352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.640 [2024-07-15 11:44:47.038383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.640 [2024-07-15 11:44:47.045838] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.640 [2024-07-15 11:44:47.046324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.640 [2024-07-15 11:44:47.046354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.640 [2024-07-15 11:44:47.052376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.640 [2024-07-15 11:44:47.052855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.640 [2024-07-15 11:44:47.052886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.640 [2024-07-15 11:44:47.058323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.640 [2024-07-15 11:44:47.058773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.640 [2024-07-15 11:44:47.058803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.640 [2024-07-15 11:44:47.064894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.640 [2024-07-15 11:44:47.065339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.640 [2024-07-15 11:44:47.065369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.640 [2024-07-15 11:44:47.071706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.640 [2024-07-15 11:44:47.072226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.640 [2024-07-15 11:44:47.072263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.640 [2024-07-15 11:44:47.079869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.640 [2024-07-15 11:44:47.080324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.640 [2024-07-15 11:44:47.080355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.640 [2024-07-15 11:44:47.087903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.640 [2024-07-15 11:44:47.088465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.640 [2024-07-15 11:44:47.088495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.640 [2024-07-15 11:44:47.096983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.640 [2024-07-15 11:44:47.097485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.640 [2024-07-15 11:44:47.097516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.899 [2024-07-15 11:44:47.105537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.899 [2024-07-15 11:44:47.105986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.899 [2024-07-15 11:44:47.106017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.899 [2024-07-15 11:44:47.114517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.899 [2024-07-15 11:44:47.114967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.899 [2024-07-15 11:44:47.114998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.899 [2024-07-15 11:44:47.123679] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.899 [2024-07-15 11:44:47.124212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.899 [2024-07-15 11:44:47.124243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.899 [2024-07-15 11:44:47.132842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.899 [2024-07-15 11:44:47.133391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.899 [2024-07-15 11:44:47.133422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.899 [2024-07-15 11:44:47.141505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.899 [2024-07-15 11:44:47.142055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.899 [2024-07-15 11:44:47.142086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.899 [2024-07-15 11:44:47.149792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.899 [2024-07-15 11:44:47.150311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.899 [2024-07-15 11:44:47.150342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.899 [2024-07-15 11:44:47.158464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.899 [2024-07-15 11:44:47.158959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.899 [2024-07-15 11:44:47.158989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.899 [2024-07-15 11:44:47.166946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.899 [2024-07-15 11:44:47.167467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.899 [2024-07-15 11:44:47.167503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.899 [2024-07-15 11:44:47.174881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.899 [2024-07-15 11:44:47.175333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.899 [2024-07-15 11:44:47.175365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:12.899 [2024-07-15 11:44:47.182706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.899 [2024-07-15 11:44:47.183200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.899 [2024-07-15 11:44:47.183232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.899 [2024-07-15 11:44:47.189667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.899 [2024-07-15 11:44:47.190112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.899 [2024-07-15 11:44:47.190143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.899 [2024-07-15 11:44:47.195938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.899 [2024-07-15 11:44:47.196381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.899 [2024-07-15 11:44:47.196414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.899 [2024-07-15 11:44:47.202978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.899 [2024-07-15 11:44:47.203418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.899 [2024-07-15 11:44:47.203449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.899 [2024-07-15 11:44:47.209775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.899 [2024-07-15 11:44:47.210230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.899 [2024-07-15 11:44:47.210267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.899 [2024-07-15 11:44:47.217209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.899 [2024-07-15 11:44:47.217667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.899 [2024-07-15 11:44:47.217697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.899 [2024-07-15 11:44:47.224874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.899 [2024-07-15 11:44:47.225329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.899 [2024-07-15 11:44:47.225359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.899 [2024-07-15 11:44:47.232466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.899 [2024-07-15 11:44:47.232914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.899 [2024-07-15 11:44:47.232944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.899 [2024-07-15 11:44:47.241567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.899 [2024-07-15 11:44:47.242125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.899 [2024-07-15 11:44:47.242154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.899 [2024-07-15 11:44:47.249846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.899 [2024-07-15 11:44:47.250305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.899 [2024-07-15 11:44:47.250336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.899 [2024-07-15 11:44:47.256598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.899 [2024-07-15 11:44:47.257040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.900 [2024-07-15 11:44:47.257070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.900 [2024-07-15 11:44:47.263083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.900 [2024-07-15 11:44:47.263521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.900 [2024-07-15 11:44:47.263552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.900 [2024-07-15 11:44:47.269962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.900 [2024-07-15 11:44:47.270419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.900 [2024-07-15 11:44:47.270449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.900 [2024-07-15 11:44:47.277763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.900 [2024-07-15 11:44:47.278283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.900 [2024-07-15 11:44:47.278312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.900 [2024-07-15 11:44:47.285901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.900 [2024-07-15 11:44:47.286419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.900 [2024-07-15 11:44:47.286449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.900 [2024-07-15 11:44:47.294691] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.900 [2024-07-15 11:44:47.295222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.900 [2024-07-15 11:44:47.295266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.900 [2024-07-15 11:44:47.303338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.900 [2024-07-15 11:44:47.303825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.900 [2024-07-15 11:44:47.303855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.900 [2024-07-15 11:44:47.311048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.900 [2024-07-15 11:44:47.311518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.900 [2024-07-15 11:44:47.311549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.900 [2024-07-15 11:44:47.318862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.900 [2024-07-15 11:44:47.319319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.900 [2024-07-15 11:44:47.319350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.900 [2024-07-15 11:44:47.326566] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.900 [2024-07-15 11:44:47.327006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.900 [2024-07-15 11:44:47.327036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.900 [2024-07-15 11:44:47.335144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.900 [2024-07-15 11:44:47.335585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.900 [2024-07-15 11:44:47.335615] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.900 [2024-07-15 11:44:47.342736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.900 [2024-07-15 11:44:47.343187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.900 [2024-07-15 11:44:47.343217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.900 [2024-07-15 11:44:47.349193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.900 [2024-07-15 11:44:47.349645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.900 [2024-07-15 11:44:47.349675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.900 [2024-07-15 11:44:47.355520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:12.900 [2024-07-15 11:44:47.355963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.900 [2024-07-15 11:44:47.355994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.900 [2024-07-15 11:44:47.361663] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.159 [2024-07-15 11:44:47.362116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.159 [2024-07-15 11:44:47.362147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.159 [2024-07-15 11:44:47.368012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.159 [2024-07-15 11:44:47.368457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.159 [2024-07-15 11:44:47.368488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.159 [2024-07-15 11:44:47.374489] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.159 [2024-07-15 11:44:47.374944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.159 [2024-07-15 11:44:47.374974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.159 [2024-07-15 11:44:47.381348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.159 [2024-07-15 11:44:47.381798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.159 
[2024-07-15 11:44:47.381828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.159 [2024-07-15 11:44:47.387734] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.159 [2024-07-15 11:44:47.388186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.159 [2024-07-15 11:44:47.388217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.159 [2024-07-15 11:44:47.393659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.394080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.394110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.400165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.400558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.400588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.406812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.407219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.407248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.412362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.412718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.412747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.417626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.417956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.417986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.422702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.423004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.423036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.428027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.428345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.428376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.433859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.434147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.434178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.439763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.440064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.440095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.445833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.446130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.446160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.452008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.452556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.452587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.457533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.457831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.457862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.462550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.462854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.462889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.467533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.467835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.467866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.472433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.472737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.472767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.477697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.477995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.478026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.482695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.482996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.483027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.487778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.488069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.488099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.493650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.493956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.493985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.498777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.499083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.499113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.503678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.503979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.504009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.508663] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.508962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.508992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.513601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.513906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.513936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.518506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.518809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.518839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.523453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.523759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.523789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.528412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.528725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.528756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.533342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 
[2024-07-15 11:44:47.533637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.533668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.539123] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.539425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.539455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.544545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.544862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-07-15 11:44:47.544892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.160 [2024-07-15 11:44:47.549563] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.160 [2024-07-15 11:44:47.549866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-07-15 11:44:47.549895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.161 [2024-07-15 11:44:47.554462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.161 [2024-07-15 11:44:47.554773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-07-15 11:44:47.554803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.161 [2024-07-15 11:44:47.559443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.161 [2024-07-15 11:44:47.559737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-07-15 11:44:47.559767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.161 [2024-07-15 11:44:47.564384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.161 [2024-07-15 11:44:47.564686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-07-15 11:44:47.564716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.161 [2024-07-15 11:44:47.569318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.161 [2024-07-15 11:44:47.569631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-07-15 11:44:47.569661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.161 [2024-07-15 11:44:47.574247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.161 [2024-07-15 11:44:47.574568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-07-15 11:44:47.574598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.161 [2024-07-15 11:44:47.579198] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.161 [2024-07-15 11:44:47.579501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-07-15 11:44:47.579532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.161 [2024-07-15 11:44:47.584175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.161 [2024-07-15 11:44:47.584502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-07-15 11:44:47.584532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.161 [2024-07-15 11:44:47.589116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.161 [2024-07-15 11:44:47.589420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-07-15 11:44:47.589450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.161 [2024-07-15 11:44:47.594090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.161 [2024-07-15 11:44:47.594407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-07-15 11:44:47.594442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.161 [2024-07-15 11:44:47.599025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.161 [2024-07-15 11:44:47.599334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-07-15 11:44:47.599364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.161 [2024-07-15 11:44:47.603980] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.161 [2024-07-15 11:44:47.604288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-07-15 11:44:47.604318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.161 [2024-07-15 11:44:47.608914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.161 [2024-07-15 11:44:47.609215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-07-15 11:44:47.609245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.161 [2024-07-15 11:44:47.613784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.161 [2024-07-15 11:44:47.614100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-07-15 11:44:47.614130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.161 [2024-07-15 11:44:47.618694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.161 [2024-07-15 11:44:47.618996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-07-15 11:44:47.619026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.421 [2024-07-15 11:44:47.623610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.421 [2024-07-15 11:44:47.623915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.421 [2024-07-15 11:44:47.623944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.421 [2024-07-15 11:44:47.628579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.421 [2024-07-15 11:44:47.628886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.421 [2024-07-15 11:44:47.628916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.421 [2024-07-15 11:44:47.633495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.421 [2024-07-15 11:44:47.633804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.421 [2024-07-15 11:44:47.633835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:13.421 [2024-07-15 11:44:47.638427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.421 [2024-07-15 11:44:47.638736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.421 [2024-07-15 11:44:47.638766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.421 [2024-07-15 11:44:47.643420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.421 [2024-07-15 11:44:47.643726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.421 [2024-07-15 11:44:47.643756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.421 [2024-07-15 11:44:47.648355] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.421 [2024-07-15 11:44:47.648669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.421 [2024-07-15 11:44:47.648699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.421 [2024-07-15 11:44:47.653528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.421 [2024-07-15 11:44:47.653861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.421 [2024-07-15 11:44:47.653891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.421 [2024-07-15 11:44:47.660009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.421 [2024-07-15 11:44:47.660353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.421 [2024-07-15 11:44:47.660383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.421 [2024-07-15 11:44:47.666134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.421 [2024-07-15 11:44:47.666466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.421 [2024-07-15 11:44:47.666496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.421 [2024-07-15 11:44:47.672364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.421 [2024-07-15 11:44:47.672714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.421 [2024-07-15 11:44:47.672744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.421 [2024-07-15 11:44:47.678552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.421 [2024-07-15 11:44:47.678846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.421 [2024-07-15 11:44:47.678877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.421 [2024-07-15 11:44:47.683599] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.421 [2024-07-15 11:44:47.683923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.421 [2024-07-15 11:44:47.683953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.421 [2024-07-15 11:44:47.688624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.421 [2024-07-15 11:44:47.688944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.421 [2024-07-15 11:44:47.688974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.421 [2024-07-15 11:44:47.693618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.421 [2024-07-15 11:44:47.693924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.421 [2024-07-15 11:44:47.693954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.421 [2024-07-15 11:44:47.698718] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.421 [2024-07-15 11:44:47.699019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.421 [2024-07-15 11:44:47.699049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.421 [2024-07-15 11:44:47.703755] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.421 [2024-07-15 11:44:47.704069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.421 [2024-07-15 11:44:47.704099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.421 [2024-07-15 11:44:47.708830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.421 [2024-07-15 11:44:47.709131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.421 [2024-07-15 11:44:47.709161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.421 [2024-07-15 11:44:47.713816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.421 [2024-07-15 11:44:47.714123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.421 [2024-07-15 11:44:47.714153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.421 [2024-07-15 11:44:47.718841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.421 [2024-07-15 11:44:47.719150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.421 [2024-07-15 11:44:47.719180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.421 [2024-07-15 11:44:47.723841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.422 [2024-07-15 11:44:47.724152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-07-15 11:44:47.724182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.422 [2024-07-15 11:44:47.728994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.422 [2024-07-15 11:44:47.729304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-07-15 11:44:47.729340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.422 [2024-07-15 11:44:47.733986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.422 [2024-07-15 11:44:47.734315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-07-15 11:44:47.734345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.422 [2024-07-15 11:44:47.739565] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.422 [2024-07-15 11:44:47.739872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-07-15 11:44:47.739902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.422 [2024-07-15 11:44:47.745659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.422 [2024-07-15 11:44:47.745947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-07-15 11:44:47.745978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.422 [2024-07-15 11:44:47.751885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.422 [2024-07-15 11:44:47.752181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-07-15 11:44:47.752211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.422 [2024-07-15 11:44:47.756949] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.422 [2024-07-15 11:44:47.757265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-07-15 11:44:47.757295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.422 [2024-07-15 11:44:47.761906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.422 [2024-07-15 11:44:47.762205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-07-15 11:44:47.762235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.422 [2024-07-15 11:44:47.766930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.422 [2024-07-15 11:44:47.767239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-07-15 11:44:47.767277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.422 [2024-07-15 11:44:47.771980] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.422 [2024-07-15 11:44:47.772295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-07-15 11:44:47.772326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.422 [2024-07-15 11:44:47.776974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.422 [2024-07-15 11:44:47.777293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-07-15 11:44:47.777323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.422 [2024-07-15 11:44:47.782025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.422 [2024-07-15 11:44:47.782334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 
[2024-07-15 11:44:47.782364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.422 [2024-07-15 11:44:47.787099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.422 [2024-07-15 11:44:47.787429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-07-15 11:44:47.787459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.422 [2024-07-15 11:44:47.793015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.422 [2024-07-15 11:44:47.793374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-07-15 11:44:47.793404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.422 [2024-07-15 11:44:47.799615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.422 [2024-07-15 11:44:47.799963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-07-15 11:44:47.799993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.422 [2024-07-15 11:44:47.806652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.422 [2024-07-15 11:44:47.807054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-07-15 11:44:47.807084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.422 [2024-07-15 11:44:47.814179] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.422 [2024-07-15 11:44:47.814482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-07-15 11:44:47.814513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.422 [2024-07-15 11:44:47.819988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.422 [2024-07-15 11:44:47.820303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-07-15 11:44:47.820333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.422 [2024-07-15 11:44:47.825042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.422 [2024-07-15 11:44:47.825359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-07-15 11:44:47.825395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.422 [2024-07-15 11:44:47.829993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.422 [2024-07-15 11:44:47.830309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-07-15 11:44:47.830340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.422 [2024-07-15 11:44:47.835030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.422 [2024-07-15 11:44:47.835351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-07-15 11:44:47.835381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.422 [2024-07-15 11:44:47.840023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.422 [2024-07-15 11:44:47.840348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-07-15 11:44:47.840378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.422 [2024-07-15 11:44:47.845110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x210acd0) with pdu=0x2000190fef90 00:29:13.422 [2024-07-15 11:44:47.845422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-07-15 11:44:47.845453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.422 00:29:13.422 Latency(us) 00:29:13.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.422 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:13.422 nvme0n1 : 2.00 4760.66 595.08 0.00 0.00 3353.42 2278.87 10009.13 00:29:13.422 =================================================================================================================== 00:29:13.422 Total : 4760.66 595.08 0.00 0.00 3353.42 2278.87 10009.13 00:29:13.422 0 00:29:13.422 11:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:13.422 11:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:13.422 11:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:13.422 | .driver_specific 00:29:13.422 | .nvme_error 00:29:13.422 | .status_code 00:29:13.422 | .command_transient_transport_error' 00:29:13.422 11:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:13.682 11:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 307 > 0 
)) 00:29:13.682 11:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2959741 00:29:13.682 11:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2959741 ']' 00:29:13.682 11:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2959741 00:29:13.682 11:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:13.682 11:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:13.941 11:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2959741 00:29:13.941 11:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:13.941 11:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:13.941 11:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2959741' 00:29:13.941 killing process with pid 2959741 00:29:13.941 11:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2959741 00:29:13.941 Received shutdown signal, test time was about 2.000000 seconds 00:29:13.941 00:29:13.941 Latency(us) 00:29:13.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.941 =================================================================================================================== 00:29:13.941 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:13.941 11:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2959741 00:29:14.199 11:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2957347 00:29:14.199 11:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2957347 ']' 00:29:14.199 11:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2957347 00:29:14.199 11:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:14.199 11:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:14.199 11:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2957347 00:29:14.199 11:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:14.199 11:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:14.199 11:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2957347' 00:29:14.199 killing process with pid 2957347 00:29:14.199 11:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2957347 00:29:14.199 11:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2957347 00:29:14.458 00:29:14.458 real 0m18.104s 00:29:14.458 user 0m36.686s 00:29:14.458 sys 0m4.353s 00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:14.458 ************************************ 00:29:14.458 END TEST nvmf_digest_error 00:29:14.458 ************************************ 
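The pass/fail check above (host/digest.sh@71, "(( 307 > 0 ))") reads the transient transport error counter back from the running bdevperf instance over its RPC socket. A minimal sketch of that same query, assuming the /var/tmp/bperf.sock socket path and the nvme0n1 bdev name used in this run:

    # Fetch per-bdev iostat from bdevperf and pull out the NVMe
    # "command transient transport error" status counter
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Each injected data digest error in the log above is completed as a COMMAND TRANSIENT TRANSPORT ERROR (00/22), so the digest-error test passes when this counter is greater than zero (307 in this run).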
00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:14.458 rmmod nvme_tcp 00:29:14.458 rmmod nvme_fabrics 00:29:14.458 rmmod nvme_keyring 00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2957347 ']' 00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2957347 00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 2957347 ']' 00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 2957347 00:29:14.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2957347) - No such process 00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 2957347 is not found' 00:29:14.458 Process with pid 2957347 is not found 00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:14.458 11:44:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.995 11:44:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:16.995 00:29:16.995 real 0m45.340s 00:29:16.995 user 1m16.395s 00:29:16.995 sys 0m13.412s 00:29:16.995 11:44:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:16.995 11:44:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:16.995 ************************************ 00:29:16.995 END TEST nvmf_digest 00:29:16.995 ************************************ 00:29:16.995 11:44:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:16.995 11:44:50 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:29:16.995 11:44:50 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:29:16.995 11:44:50 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:29:16.995 11:44:50 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:16.995 11:44:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:16.995 11:44:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:16.995 11:44:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:16.995 ************************************ 00:29:16.995 START TEST nvmf_bdevperf 00:29:16.995 ************************************ 00:29:16.995 11:44:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:16.995 * Looking for test storage... 00:29:16.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:16.995 11:44:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:16.995 11:44:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:16.995 11:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:22.271 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:22.271 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:22.271 Found net devices under 0000:af:00.0: cvl_0_0 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:22.271 Found net devices under 0000:af:00.1: cvl_0_1 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:22.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:22.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:29:22.271 00:29:22.271 --- 10.0.0.2 ping statistics --- 00:29:22.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.271 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:22.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:22.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:29:22.271 00:29:22.271 --- 10.0.0.1 ping statistics --- 00:29:22.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.271 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:22.271 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:22.272 11:44:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:22.272 11:44:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:22.272 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2964018 00:29:22.272 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2964018 00:29:22.272 11:44:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2964018 ']' 00:29:22.272 11:44:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.272 11:44:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:22.272 11:44:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:22.272 11:44:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:22.272 11:44:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:22.272 11:44:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:22.272 [2024-07-15 11:44:56.644454] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:29:22.272 [2024-07-15 11:44:56.644511] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:22.272 EAL: No free 2048 kB hugepages reported on node 1 00:29:22.272 [2024-07-15 11:44:56.730291] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:22.531 [2024-07-15 11:44:56.840880] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:22.531 [2024-07-15 11:44:56.840926] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:22.531 [2024-07-15 11:44:56.840941] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:22.531 [2024-07-15 11:44:56.840952] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:22.531 [2024-07-15 11:44:56.840961] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:22.531 [2024-07-15 11:44:56.841085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:22.531 [2024-07-15 11:44:56.844291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:22.531 [2024-07-15 11:44:56.844297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.790 11:44:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:22.790 11:44:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:22.790 11:44:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:22.790 11:44:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:22.790 11:44:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:22.790 11:44:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:22.790 11:44:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:22.790 11:44:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.790 11:44:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:22.790 [2024-07-15 11:44:57.216272] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:22.790 11:44:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.790 11:44:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:22.790 11:44:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.790 11:44:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:23.049 Malloc0 00:29:23.049 11:44:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.049 11:44:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:23.049 11:44:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.049 11:44:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:23.049 11:44:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.049 11:44:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:23.049 11:44:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.049 11:44:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:23.049 11:44:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.049 11:44:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:23.049 11:44:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.049 11:44:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:23.049 [2024-07-15 11:44:57.283666] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:23.049 11:44:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.049 11:44:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:23.049 11:44:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:23.049 11:44:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:23.049 11:44:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:23.049 11:44:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.049 11:44:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.049 { 00:29:23.049 "params": { 00:29:23.049 "name": "Nvme$subsystem", 00:29:23.049 "trtype": "$TEST_TRANSPORT", 00:29:23.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.049 "adrfam": "ipv4", 00:29:23.049 "trsvcid": "$NVMF_PORT", 00:29:23.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.049 "hdgst": ${hdgst:-false}, 00:29:23.049 "ddgst": ${ddgst:-false} 00:29:23.049 }, 00:29:23.049 "method": "bdev_nvme_attach_controller" 00:29:23.049 } 00:29:23.049 EOF 00:29:23.049 )") 00:29:23.049 11:44:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:23.049 11:44:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:23.049 11:44:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:23.049 11:44:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:23.049 "params": { 00:29:23.049 "name": "Nvme1", 00:29:23.049 "trtype": "tcp", 00:29:23.049 "traddr": "10.0.0.2", 00:29:23.049 "adrfam": "ipv4", 00:29:23.049 "trsvcid": "4420", 00:29:23.049 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:23.049 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:23.049 "hdgst": false, 00:29:23.049 "ddgst": false 00:29:23.049 }, 00:29:23.049 "method": "bdev_nvme_attach_controller" 00:29:23.049 }' 00:29:23.049 [2024-07-15 11:44:57.338963] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:29:23.049 [2024-07-15 11:44:57.339020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2964206 ] 00:29:23.049 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.049 [2024-07-15 11:44:57.420222] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.049 [2024-07-15 11:44:57.506979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.307 Running I/O for 1 seconds... 00:29:24.683 00:29:24.683 Latency(us) 00:29:24.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.683 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:24.683 Verification LBA range: start 0x0 length 0x4000 00:29:24.683 Nvme1n1 : 1.01 6180.57 24.14 0.00 0.00 20543.18 4081.11 17754.30 00:29:24.683 =================================================================================================================== 00:29:24.683 Total : 6180.57 24.14 0.00 0.00 20543.18 4081.11 17754.30 00:29:24.683 11:44:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2964475 00:29:24.683 11:44:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:24.683 11:44:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:24.683 11:44:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:24.683 11:44:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:24.683 11:44:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:24.683 11:44:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:24.683 11:44:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:24.683 { 00:29:24.683 "params": { 00:29:24.683 "name": "Nvme$subsystem", 00:29:24.683 "trtype": "$TEST_TRANSPORT", 00:29:24.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:24.683 "adrfam": "ipv4", 00:29:24.683 "trsvcid": "$NVMF_PORT", 00:29:24.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:24.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:24.683 "hdgst": ${hdgst:-false}, 00:29:24.683 "ddgst": ${ddgst:-false} 00:29:24.683 }, 00:29:24.683 "method": "bdev_nvme_attach_controller" 00:29:24.683 } 00:29:24.683 EOF 00:29:24.683 )") 00:29:24.683 11:44:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:24.683 11:44:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:24.683 11:44:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:24.683 11:44:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:24.683 "params": { 00:29:24.683 "name": "Nvme1", 00:29:24.683 "trtype": "tcp", 00:29:24.683 "traddr": "10.0.0.2", 00:29:24.683 "adrfam": "ipv4", 00:29:24.683 "trsvcid": "4420", 00:29:24.683 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:24.683 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:24.683 "hdgst": false, 00:29:24.683 "ddgst": false 00:29:24.683 }, 00:29:24.683 "method": "bdev_nvme_attach_controller" 00:29:24.683 }' 00:29:24.683 [2024-07-15 11:44:58.981854] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:29:24.683 [2024-07-15 11:44:58.981916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2964475 ] 00:29:24.683 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.683 [2024-07-15 11:44:59.064417] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.683 [2024-07-15 11:44:59.145985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.941 Running I/O for 15 seconds... 00:29:28.230 11:45:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2964018 00:29:28.230 11:45:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:28.230 [2024-07-15 11:45:01.950413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:128600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.230 [2024-07-15 11:45:01.950462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.230 [2024-07-15 11:45:01.950488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:128608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.230 [2024-07-15 11:45:01.950503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.230 [2024-07-15 11:45:01.950518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:128616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.230 [2024-07-15 11:45:01.950531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.230 [2024-07-15 11:45:01.950545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.230 [2024-07-15 11:45:01.950558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.230 [2024-07-15 11:45:01.950574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:128632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.230 [2024-07-15 11:45:01.950588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.230 [2024-07-15 11:45:01.950602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.230 [2024-07-15 11:45:01.950614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.230 [2024-07-15 11:45:01.950629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:128648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.230 [2024-07-15 11:45:01.950641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.230 [2024-07-15 11:45:01.950657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:128656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.230 [2024-07-15 11:45:01.950668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.230 [2024-07-15 11:45:01.950682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.950695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.950708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.950719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.950731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:128680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.950741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.950762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:128688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.950774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.950787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:128696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.950800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.950815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.950827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.950841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.950852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.950864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:128720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.950875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.950889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:128728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.950899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.950914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:128736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.950926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.950942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:128744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.950953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.950969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.950982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.950997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.951008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.951025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:128768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.951036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.951050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.951062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.951078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:128784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.951092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.951107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.951121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.951136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.951146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.951159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:128808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.951169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.951181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.951190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:28.231 [2024-07-15 11:45:01.951202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:128824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.951212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.951224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:128832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.951233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.951246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.951262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.951274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:128848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.951284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.951296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:128856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.951305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.951318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:128864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.951328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.951340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.951350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.951363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.951372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.951387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.951397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.951410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.951419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 
11:45:01.951431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:128904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.951441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.951454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:128912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.951463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.951476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.951485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.951497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.951507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.951520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.951529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.951541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.951551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.231 [2024-07-15 11:45:01.951563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.231 [2024-07-15 11:45:01.951573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.951584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.951594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.951606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:128968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.951615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.951628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.951637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.951649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:128984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.951661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.951673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:128992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.951683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.951697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.951706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.951719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.951729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.951741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:129016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.951750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.951762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:129024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.951772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.951784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:129032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.951794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.951806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:129040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.951816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.951828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:129048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.951838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.951850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:129056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.951859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.951871] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:70 nsid:1 lba:129064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.951881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.951893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:129072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.951903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.951915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:129080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.951925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.951938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.951948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.951960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:129096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.951969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.951982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:129104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.951991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.952004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:129112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.952014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.952026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:129120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.952036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.952049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:129128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.952058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.952070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:129136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.952080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.952092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 
nsid:1 lba:129144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.952101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.952113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.952122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.952134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:129160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.952143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.952156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:129168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.952165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.952177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:129176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.952186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.952198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:129184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.952210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.952222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:129192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.952231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.952243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.952253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.952272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.952285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.952297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:129216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.952307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.952319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:129224 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.952329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.952341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:129616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.232 [2024-07-15 11:45:01.952351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.952362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.952372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.952387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:129240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.952397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.952409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:129248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.952418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.952430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:129256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.952440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.952452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:129264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.232 [2024-07-15 11:45:01.952461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.232 [2024-07-15 11:45:01.952473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:129272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.952483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.952497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:129280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.952507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.952518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:129288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.952528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.952539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:28.233 [2024-07-15 11:45:01.952549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.952561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.952570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.952582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:129312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.952591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.952603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:129320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.952613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.952624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.952636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.952648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:129336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.952658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.952670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:129344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.952679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.952691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.952700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.952712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:129360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.952722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.952735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:129368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.952744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.952756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:129376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 
11:45:01.952767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.952779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:129384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.952789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.952800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:129392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.952810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.952822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:129400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.952832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.952844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.952853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.952865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:129416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.952874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.952886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:129424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.952895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.952907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:129432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.952917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.952928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:129440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.952938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.952950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:129448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.952959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.952971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:129456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.952981] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.952993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:129464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.953003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.953014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:129472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.953024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.953036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:129480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.953047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.953059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:129488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.953068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.953081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:129496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.953091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.953103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:129504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.953112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.953124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:129512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.953134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.953146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:129520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.953155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.953167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:129528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.953176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.953188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:129536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.953198] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.953210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:129544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.953219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.953231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:129552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.953241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.233 [2024-07-15 11:45:01.953253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:129560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.233 [2024-07-15 11:45:01.953269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.234 [2024-07-15 11:45:01.953281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:129568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.234 [2024-07-15 11:45:01.953290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.234 [2024-07-15 11:45:01.953302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:129576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.234 [2024-07-15 11:45:01.953312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.234 [2024-07-15 11:45:01.953329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.234 [2024-07-15 11:45:01.953340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.234 [2024-07-15 11:45:01.953352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:129592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.234 [2024-07-15 11:45:01.953361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.234 [2024-07-15 11:45:01.953374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:129600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.234 [2024-07-15 11:45:01.953383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.234 [2024-07-15 11:45:01.953394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b291c0 is same with the state(5) to be set 00:29:28.234 [2024-07-15 11:45:01.953405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:28.234 [2024-07-15 11:45:01.953413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:28.234 [2024-07-15 11:45:01.953421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129608 len:8 PRP1 0x0 PRP2 0x0 00:29:28.234 [2024-07-15 11:45:01.953431] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.234 [2024-07-15 11:45:01.953480] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b291c0 was disconnected and freed. reset controller. 00:29:28.234 [2024-07-15 11:45:01.957905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.234 [2024-07-15 11:45:01.957967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.234 [2024-07-15 11:45:01.958756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-07-15 11:45:01.958777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.234 [2024-07-15 11:45:01.958788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.234 [2024-07-15 11:45:01.959054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.234 [2024-07-15 11:45:01.959327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.234 [2024-07-15 11:45:01.959339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.234 [2024-07-15 11:45:01.959350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.234 [2024-07-15 11:45:01.963606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.234 [2024-07-15 11:45:01.972912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.234 [2024-07-15 11:45:01.973335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-07-15 11:45:01.973358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.234 [2024-07-15 11:45:01.973369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.234 [2024-07-15 11:45:01.973634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.234 [2024-07-15 11:45:01.973900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.234 [2024-07-15 11:45:01.973915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.234 [2024-07-15 11:45:01.973926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.234 [2024-07-15 11:45:01.978182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
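The connect() failures above carry errno = 111, which on Linux is ECONNREFUSED: the kill -9 issued by bdevperf.sh just before this abort storm began (pid 2964018, evidently the nvmf target, since its 10.0.0.2:4420 listener stops accepting connections) leaves nothing listening while bdevperf keeps retrying the controller reset. A minimal out-of-harness sketch of the same scenario, assuming a standard SPDK checkout (the nvmf_tgt path, core mask, and bdevperf.json filename are illustrative, not taken from this log; the rpc.py calls mirror the rpc_cmd invocations traced earlier in this section):

  # stand up the target and the subsystem the test created
  ./build/bin/nvmf_tgt -m 0xE &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # run bdevperf with the attach-controller JSON printed above saved to bdevperf.json,
  # then kill the target mid-run to provoke the abort/reconnect storm recorded here
  ./build/examples/bdevperf --json bdevperf.json -q 128 -o 4096 -w verify -t 15 -f &
  sleep 3; kill -9 $(pgrep -f nvmf_tgt)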
00:29:28.234 [2024-07-15 11:45:01.987476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.234 [2024-07-15 11:45:01.988017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-07-15 11:45:01.988060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.234 [2024-07-15 11:45:01.988082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.234 [2024-07-15 11:45:01.988677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.234 [2024-07-15 11:45:01.989267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.234 [2024-07-15 11:45:01.989293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.234 [2024-07-15 11:45:01.989313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.234 [2024-07-15 11:45:01.993590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.234 [2024-07-15 11:45:02.002112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.234 [2024-07-15 11:45:02.002645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-07-15 11:45:02.002667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.234 [2024-07-15 11:45:02.002677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.234 [2024-07-15 11:45:02.002941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.234 [2024-07-15 11:45:02.003205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.234 [2024-07-15 11:45:02.003216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.234 [2024-07-15 11:45:02.003226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.234 [2024-07-15 11:45:02.007495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.234 [2024-07-15 11:45:02.016784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.234 [2024-07-15 11:45:02.017271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-07-15 11:45:02.017292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.234 [2024-07-15 11:45:02.017302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.234 [2024-07-15 11:45:02.017568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.234 [2024-07-15 11:45:02.017832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.234 [2024-07-15 11:45:02.017845] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.234 [2024-07-15 11:45:02.017855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.234 [2024-07-15 11:45:02.022104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.234 [2024-07-15 11:45:02.031388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.234 [2024-07-15 11:45:02.031922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-07-15 11:45:02.031943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.234 [2024-07-15 11:45:02.031954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.234 [2024-07-15 11:45:02.032218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.234 [2024-07-15 11:45:02.032490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.234 [2024-07-15 11:45:02.032503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.234 [2024-07-15 11:45:02.032512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.234 [2024-07-15 11:45:02.036761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
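Each retry cycle in these entries follows the same shape: nvme_ctrlr_disconnect, a TCP connect that fails with ECONNREFUSED, a failed controller re-initialization (nvme_ctrlr_process_init / spdk_nvme_ctrlr_reconnect_poll_async), the controller marked as failed, and _bdev_nvme_reset_ctrlr_complete reporting the reset as failed before the next attempt roughly 15 ms later. A quick host-side check that nothing is listening on the target address while this loop runs (hypothetical commands, not part of this test run; nvme-cli is assumed to be available):

  # both should report a refused connection while the target is down
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' && echo "port 4420 open" || echo "port 4420 refused"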
00:29:28.234 [2024-07-15 11:45:02.046060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.234 [2024-07-15 11:45:02.046595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-07-15 11:45:02.046616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.234 [2024-07-15 11:45:02.046626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.234 [2024-07-15 11:45:02.046891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.234 [2024-07-15 11:45:02.047157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.234 [2024-07-15 11:45:02.047168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.234 [2024-07-15 11:45:02.047178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.234 [2024-07-15 11:45:02.051442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.234 [2024-07-15 11:45:02.060720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.234 [2024-07-15 11:45:02.061192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-07-15 11:45:02.061214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.234 [2024-07-15 11:45:02.061224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.234 [2024-07-15 11:45:02.061495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.234 [2024-07-15 11:45:02.061761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.234 [2024-07-15 11:45:02.061772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.235 [2024-07-15 11:45:02.061781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.235 [2024-07-15 11:45:02.066024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.235 [2024-07-15 11:45:02.075299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.235 [2024-07-15 11:45:02.075854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.235 [2024-07-15 11:45:02.075875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.235 [2024-07-15 11:45:02.075885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.235 [2024-07-15 11:45:02.076153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.235 [2024-07-15 11:45:02.076427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.235 [2024-07-15 11:45:02.076439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.235 [2024-07-15 11:45:02.076449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.235 [2024-07-15 11:45:02.080695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.235 [2024-07-15 11:45:02.089979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.235 [2024-07-15 11:45:02.090516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.235 [2024-07-15 11:45:02.090566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.235 [2024-07-15 11:45:02.090587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.235 [2024-07-15 11:45:02.091167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.235 [2024-07-15 11:45:02.091479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.235 [2024-07-15 11:45:02.091492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.235 [2024-07-15 11:45:02.091502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.235 [2024-07-15 11:45:02.095744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.235 [2024-07-15 11:45:02.104563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.235 [2024-07-15 11:45:02.105020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.235 [2024-07-15 11:45:02.105042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.235 [2024-07-15 11:45:02.105052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.235 [2024-07-15 11:45:02.105324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.235 [2024-07-15 11:45:02.105590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.235 [2024-07-15 11:45:02.105601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.235 [2024-07-15 11:45:02.105610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.235 [2024-07-15 11:45:02.109860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.235 [2024-07-15 11:45:02.119160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.235 [2024-07-15 11:45:02.119720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.235 [2024-07-15 11:45:02.119742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.235 [2024-07-15 11:45:02.119752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.235 [2024-07-15 11:45:02.120016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.235 [2024-07-15 11:45:02.120288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.235 [2024-07-15 11:45:02.120300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.235 [2024-07-15 11:45:02.120313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.235 [2024-07-15 11:45:02.124555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.235 [2024-07-15 11:45:02.133844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.235 [2024-07-15 11:45:02.134396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.235 [2024-07-15 11:45:02.134441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.235 [2024-07-15 11:45:02.134463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.235 [2024-07-15 11:45:02.135043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.235 [2024-07-15 11:45:02.135451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.235 [2024-07-15 11:45:02.135463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.235 [2024-07-15 11:45:02.135473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.235 [2024-07-15 11:45:02.139724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.235 [2024-07-15 11:45:02.148512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.235 [2024-07-15 11:45:02.149012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.235 [2024-07-15 11:45:02.149033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.235 [2024-07-15 11:45:02.149043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.235 [2024-07-15 11:45:02.149314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.235 [2024-07-15 11:45:02.149581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.235 [2024-07-15 11:45:02.149592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.235 [2024-07-15 11:45:02.149602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.235 [2024-07-15 11:45:02.153857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.235 [2024-07-15 11:45:02.163149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.235 [2024-07-15 11:45:02.163637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.235 [2024-07-15 11:45:02.163659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.235 [2024-07-15 11:45:02.163670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.236 [2024-07-15 11:45:02.163934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.236 [2024-07-15 11:45:02.164199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.236 [2024-07-15 11:45:02.164211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.236 [2024-07-15 11:45:02.164220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.236 [2024-07-15 11:45:02.168471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.236 [2024-07-15 11:45:02.177744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.236 [2024-07-15 11:45:02.178295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.236 [2024-07-15 11:45:02.178347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.236 [2024-07-15 11:45:02.178370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.236 [2024-07-15 11:45:02.178923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.236 [2024-07-15 11:45:02.179188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.236 [2024-07-15 11:45:02.179200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.236 [2024-07-15 11:45:02.179208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.236 [2024-07-15 11:45:02.183460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.236 [2024-07-15 11:45:02.192494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.236 [2024-07-15 11:45:02.193056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.236 [2024-07-15 11:45:02.193098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.236 [2024-07-15 11:45:02.193119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.236 [2024-07-15 11:45:02.193711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.236 [2024-07-15 11:45:02.194015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.236 [2024-07-15 11:45:02.194026] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.236 [2024-07-15 11:45:02.194035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.236 [2024-07-15 11:45:02.198290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.236 [2024-07-15 11:45:02.207060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.236 [2024-07-15 11:45:02.207562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.236 [2024-07-15 11:45:02.207584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.236 [2024-07-15 11:45:02.207594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.236 [2024-07-15 11:45:02.207858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.236 [2024-07-15 11:45:02.208123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.236 [2024-07-15 11:45:02.208135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.236 [2024-07-15 11:45:02.208144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.236 [2024-07-15 11:45:02.212401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.236 [2024-07-15 11:45:02.221695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.236 [2024-07-15 11:45:02.222273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.236 [2024-07-15 11:45:02.222315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.236 [2024-07-15 11:45:02.222338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.236 [2024-07-15 11:45:02.222872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.236 [2024-07-15 11:45:02.223143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.236 [2024-07-15 11:45:02.223154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.236 [2024-07-15 11:45:02.223163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.236 [2024-07-15 11:45:02.227423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.236 [2024-07-15 11:45:02.236467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.236 [2024-07-15 11:45:02.236901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.236 [2024-07-15 11:45:02.236923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.236 [2024-07-15 11:45:02.236933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.236 [2024-07-15 11:45:02.237197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.236 [2024-07-15 11:45:02.237472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.236 [2024-07-15 11:45:02.237484] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.236 [2024-07-15 11:45:02.237494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.236 [2024-07-15 11:45:02.241745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.236 [2024-07-15 11:45:02.251032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.236 [2024-07-15 11:45:02.251504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.236 [2024-07-15 11:45:02.251525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.236 [2024-07-15 11:45:02.251535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.236 [2024-07-15 11:45:02.251800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.236 [2024-07-15 11:45:02.252066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.236 [2024-07-15 11:45:02.252078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.236 [2024-07-15 11:45:02.252087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.236 [2024-07-15 11:45:02.256334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.236 [2024-07-15 11:45:02.265612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.236 [2024-07-15 11:45:02.266150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.236 [2024-07-15 11:45:02.266172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.236 [2024-07-15 11:45:02.266182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.236 [2024-07-15 11:45:02.266453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.236 [2024-07-15 11:45:02.266720] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.236 [2024-07-15 11:45:02.266731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.236 [2024-07-15 11:45:02.266741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.237 [2024-07-15 11:45:02.270984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.237 [2024-07-15 11:45:02.280258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.237 [2024-07-15 11:45:02.280774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.237 [2024-07-15 11:45:02.280815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.237 [2024-07-15 11:45:02.280837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.237 [2024-07-15 11:45:02.281429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.237 [2024-07-15 11:45:02.281970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.237 [2024-07-15 11:45:02.281982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.237 [2024-07-15 11:45:02.281991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.237 [2024-07-15 11:45:02.286235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.237 [2024-07-15 11:45:02.295017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.237 [2024-07-15 11:45:02.295551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.237 [2024-07-15 11:45:02.295573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.237 [2024-07-15 11:45:02.295583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.237 [2024-07-15 11:45:02.295848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.237 [2024-07-15 11:45:02.296113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.237 [2024-07-15 11:45:02.296124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.237 [2024-07-15 11:45:02.296133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.237 [2024-07-15 11:45:02.300391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.237 [2024-07-15 11:45:02.309699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.237 [2024-07-15 11:45:02.310225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.237 [2024-07-15 11:45:02.310247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.237 [2024-07-15 11:45:02.310269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.237 [2024-07-15 11:45:02.310534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.237 [2024-07-15 11:45:02.310798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.237 [2024-07-15 11:45:02.310809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.237 [2024-07-15 11:45:02.310819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.237 [2024-07-15 11:45:02.315082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.237 [2024-07-15 11:45:02.324378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.237 [2024-07-15 11:45:02.324911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.237 [2024-07-15 11:45:02.324953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.237 [2024-07-15 11:45:02.324982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.237 [2024-07-15 11:45:02.325571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.237 [2024-07-15 11:45:02.326018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.237 [2024-07-15 11:45:02.326035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.237 [2024-07-15 11:45:02.326047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.237 [2024-07-15 11:45:02.332305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.237 [2024-07-15 11:45:02.339487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.237 [2024-07-15 11:45:02.340033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.237 [2024-07-15 11:45:02.340081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.237 [2024-07-15 11:45:02.340103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.237 [2024-07-15 11:45:02.340695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.237 [2024-07-15 11:45:02.340997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.237 [2024-07-15 11:45:02.341009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.237 [2024-07-15 11:45:02.341019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.237 [2024-07-15 11:45:02.345275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.237 [2024-07-15 11:45:02.354050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.237 [2024-07-15 11:45:02.354615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.237 [2024-07-15 11:45:02.354657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.237 [2024-07-15 11:45:02.354678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.237 [2024-07-15 11:45:02.355224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.237 [2024-07-15 11:45:02.355495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.237 [2024-07-15 11:45:02.355507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.237 [2024-07-15 11:45:02.355517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.237 [2024-07-15 11:45:02.359762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.237 [2024-07-15 11:45:02.368796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.237 [2024-07-15 11:45:02.369296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.237 [2024-07-15 11:45:02.369318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.237 [2024-07-15 11:45:02.369328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.237 [2024-07-15 11:45:02.369592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.237 [2024-07-15 11:45:02.369857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.237 [2024-07-15 11:45:02.369873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.237 [2024-07-15 11:45:02.369882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.237 [2024-07-15 11:45:02.374138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.237 [2024-07-15 11:45:02.383428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.238 [2024-07-15 11:45:02.383956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.238 [2024-07-15 11:45:02.383977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.238 [2024-07-15 11:45:02.383986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.238 [2024-07-15 11:45:02.384251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.238 [2024-07-15 11:45:02.384523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.238 [2024-07-15 11:45:02.384534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.238 [2024-07-15 11:45:02.384543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.238 [2024-07-15 11:45:02.388797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.238 [2024-07-15 11:45:02.398078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.238 [2024-07-15 11:45:02.398608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.238 [2024-07-15 11:45:02.398658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.238 [2024-07-15 11:45:02.398679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.238 [2024-07-15 11:45:02.399271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.238 [2024-07-15 11:45:02.399568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.238 [2024-07-15 11:45:02.399580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.238 [2024-07-15 11:45:02.399589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.238 [2024-07-15 11:45:02.403869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.238 [2024-07-15 11:45:02.412648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.238 [2024-07-15 11:45:02.413174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.238 [2024-07-15 11:45:02.413196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.238 [2024-07-15 11:45:02.413205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.238 [2024-07-15 11:45:02.413477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.238 [2024-07-15 11:45:02.413742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.238 [2024-07-15 11:45:02.413754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.238 [2024-07-15 11:45:02.413763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.238 [2024-07-15 11:45:02.418012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.238 [2024-07-15 11:45:02.427287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.238 [2024-07-15 11:45:02.427820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.238 [2024-07-15 11:45:02.427841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.238 [2024-07-15 11:45:02.427851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.238 [2024-07-15 11:45:02.428115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.238 [2024-07-15 11:45:02.428387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.238 [2024-07-15 11:45:02.428399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.238 [2024-07-15 11:45:02.428408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.238 [2024-07-15 11:45:02.432661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.238 [2024-07-15 11:45:02.441934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.238 [2024-07-15 11:45:02.442474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.238 [2024-07-15 11:45:02.442495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.238 [2024-07-15 11:45:02.442505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.238 [2024-07-15 11:45:02.442769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.238 [2024-07-15 11:45:02.443034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.238 [2024-07-15 11:45:02.443046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.238 [2024-07-15 11:45:02.443055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.238 [2024-07-15 11:45:02.447304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.238 [2024-07-15 11:45:02.456570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.238 [2024-07-15 11:45:02.457103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.238 [2024-07-15 11:45:02.457124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.238 [2024-07-15 11:45:02.457134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.238 [2024-07-15 11:45:02.457404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.238 [2024-07-15 11:45:02.457670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.238 [2024-07-15 11:45:02.457681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.238 [2024-07-15 11:45:02.457690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.238 [2024-07-15 11:45:02.461944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.238 [2024-07-15 11:45:02.471231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.238 [2024-07-15 11:45:02.471785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.238 [2024-07-15 11:45:02.471806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.238 [2024-07-15 11:45:02.471820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.238 [2024-07-15 11:45:02.472084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.238 [2024-07-15 11:45:02.472354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.238 [2024-07-15 11:45:02.472367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.239 [2024-07-15 11:45:02.472376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.239 [2024-07-15 11:45:02.476874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.239 [2024-07-15 11:45:02.485915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.239 [2024-07-15 11:45:02.486417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.239 [2024-07-15 11:45:02.486463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.239 [2024-07-15 11:45:02.486487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.239 [2024-07-15 11:45:02.487066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.239 [2024-07-15 11:45:02.487438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.239 [2024-07-15 11:45:02.487451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.239 [2024-07-15 11:45:02.487460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.239 [2024-07-15 11:45:02.493736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.239 [2024-07-15 11:45:02.501121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.239 [2024-07-15 11:45:02.501606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.239 [2024-07-15 11:45:02.501627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.239 [2024-07-15 11:45:02.501637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.239 [2024-07-15 11:45:02.501900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.239 [2024-07-15 11:45:02.502165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.239 [2024-07-15 11:45:02.502177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.239 [2024-07-15 11:45:02.502186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.239 [2024-07-15 11:45:02.506447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.239 [2024-07-15 11:45:02.515772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.239 [2024-07-15 11:45:02.516311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.239 [2024-07-15 11:45:02.516355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.239 [2024-07-15 11:45:02.516377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.239 [2024-07-15 11:45:02.516956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.239 [2024-07-15 11:45:02.517551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.239 [2024-07-15 11:45:02.517583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.239 [2024-07-15 11:45:02.517592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.239 [2024-07-15 11:45:02.521843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.239 [2024-07-15 11:45:02.530374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.239 [2024-07-15 11:45:02.530978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.239 [2024-07-15 11:45:02.531021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.239 [2024-07-15 11:45:02.531045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.239 [2024-07-15 11:45:02.531391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.239 [2024-07-15 11:45:02.531656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.239 [2024-07-15 11:45:02.531668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.239 [2024-07-15 11:45:02.531679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.239 [2024-07-15 11:45:02.535935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.239 [2024-07-15 11:45:02.544975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.239 [2024-07-15 11:45:02.545532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.239 [2024-07-15 11:45:02.545575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.239 [2024-07-15 11:45:02.545596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.239 [2024-07-15 11:45:02.546175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.239 [2024-07-15 11:45:02.546552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.239 [2024-07-15 11:45:02.546565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.239 [2024-07-15 11:45:02.546574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.239 [2024-07-15 11:45:02.550831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.239 [2024-07-15 11:45:02.559631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.239 [2024-07-15 11:45:02.560196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.239 [2024-07-15 11:45:02.560238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.239 [2024-07-15 11:45:02.560271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.239 [2024-07-15 11:45:02.560818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.239 [2024-07-15 11:45:02.561083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.239 [2024-07-15 11:45:02.561094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.239 [2024-07-15 11:45:02.561104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.239 [2024-07-15 11:45:02.565356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.239 [2024-07-15 11:45:02.574390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.240 [2024-07-15 11:45:02.574946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.240 [2024-07-15 11:45:02.574966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.240 [2024-07-15 11:45:02.574976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.240 [2024-07-15 11:45:02.575240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.240 [2024-07-15 11:45:02.575513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.240 [2024-07-15 11:45:02.575525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.240 [2024-07-15 11:45:02.575534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.240 [2024-07-15 11:45:02.579778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.240 [2024-07-15 11:45:02.589068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.240 [2024-07-15 11:45:02.589646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.240 [2024-07-15 11:45:02.589668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.240 [2024-07-15 11:45:02.589678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.240 [2024-07-15 11:45:02.589943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.240 [2024-07-15 11:45:02.590207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.240 [2024-07-15 11:45:02.590218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.240 [2024-07-15 11:45:02.590228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.240 [2024-07-15 11:45:02.594481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.240 [2024-07-15 11:45:02.603761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.240 [2024-07-15 11:45:02.604305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.240 [2024-07-15 11:45:02.604326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.240 [2024-07-15 11:45:02.604336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.240 [2024-07-15 11:45:02.604599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.240 [2024-07-15 11:45:02.604863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.240 [2024-07-15 11:45:02.604875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.240 [2024-07-15 11:45:02.604884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.240 [2024-07-15 11:45:02.609132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.240 [2024-07-15 11:45:02.618416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.240 [2024-07-15 11:45:02.619009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.240 [2024-07-15 11:45:02.619051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.240 [2024-07-15 11:45:02.619072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.240 [2024-07-15 11:45:02.619606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.240 [2024-07-15 11:45:02.619953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.240 [2024-07-15 11:45:02.619970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.240 [2024-07-15 11:45:02.619984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.240 [2024-07-15 11:45:02.626223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.240 [2024-07-15 11:45:02.633400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.240 [2024-07-15 11:45:02.633909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.240 [2024-07-15 11:45:02.633930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.240 [2024-07-15 11:45:02.633940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.240 [2024-07-15 11:45:02.634203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.240 [2024-07-15 11:45:02.634476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.240 [2024-07-15 11:45:02.634488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.240 [2024-07-15 11:45:02.634497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.240 [2024-07-15 11:45:02.638746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.240 [2024-07-15 11:45:02.648032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.240 [2024-07-15 11:45:02.648508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.240 [2024-07-15 11:45:02.648529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.240 [2024-07-15 11:45:02.648539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.240 [2024-07-15 11:45:02.648802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.240 [2024-07-15 11:45:02.649067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.240 [2024-07-15 11:45:02.649079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.240 [2024-07-15 11:45:02.649088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.240 [2024-07-15 11:45:02.653344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.240 [2024-07-15 11:45:02.662617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.240 [2024-07-15 11:45:02.663175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.240 [2024-07-15 11:45:02.663217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.240 [2024-07-15 11:45:02.663238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.240 [2024-07-15 11:45:02.663712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.240 [2024-07-15 11:45:02.663978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.240 [2024-07-15 11:45:02.663989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.241 [2024-07-15 11:45:02.664003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.241 [2024-07-15 11:45:02.668250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.241 [2024-07-15 11:45:02.677296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.241 [2024-07-15 11:45:02.677696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.241 [2024-07-15 11:45:02.677717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.241 [2024-07-15 11:45:02.677727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.241 [2024-07-15 11:45:02.677991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.241 [2024-07-15 11:45:02.678263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.241 [2024-07-15 11:45:02.678275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.241 [2024-07-15 11:45:02.678284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.241 [2024-07-15 11:45:02.682539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.502 [2024-07-15 11:45:02.692082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.502 [2024-07-15 11:45:02.692565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.502 [2024-07-15 11:45:02.692586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.502 [2024-07-15 11:45:02.692596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.502 [2024-07-15 11:45:02.692862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.502 [2024-07-15 11:45:02.693125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.502 [2024-07-15 11:45:02.693137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.502 [2024-07-15 11:45:02.693147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.502 [2024-07-15 11:45:02.697403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.502 [2024-07-15 11:45:02.706691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.502 [2024-07-15 11:45:02.707988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.502 [2024-07-15 11:45:02.708018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.502 [2024-07-15 11:45:02.708030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.502 [2024-07-15 11:45:02.708312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.502 [2024-07-15 11:45:02.708579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.502 [2024-07-15 11:45:02.708590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.502 [2024-07-15 11:45:02.708600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.502 [2024-07-15 11:45:02.712848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.502 [2024-07-15 11:45:02.721472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.502 [2024-07-15 11:45:02.721951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.502 [2024-07-15 11:45:02.721978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.502 [2024-07-15 11:45:02.721988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.502 [2024-07-15 11:45:02.722253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.502 [2024-07-15 11:45:02.722528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.502 [2024-07-15 11:45:02.722540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.502 [2024-07-15 11:45:02.722549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.502 [2024-07-15 11:45:02.726800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.502 [2024-07-15 11:45:02.736094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.502 [2024-07-15 11:45:02.736548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.502 [2024-07-15 11:45:02.736570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.502 [2024-07-15 11:45:02.736580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.502 [2024-07-15 11:45:02.736845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.502 [2024-07-15 11:45:02.737111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.502 [2024-07-15 11:45:02.737123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.502 [2024-07-15 11:45:02.737134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.502 [2024-07-15 11:45:02.741400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.502 [2024-07-15 11:45:02.750679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.502 [2024-07-15 11:45:02.751179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.502 [2024-07-15 11:45:02.751200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.502 [2024-07-15 11:45:02.751210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.502 [2024-07-15 11:45:02.751481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.502 [2024-07-15 11:45:02.751747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.502 [2024-07-15 11:45:02.751759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.502 [2024-07-15 11:45:02.751768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.502 [2024-07-15 11:45:02.756023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.502 [2024-07-15 11:45:02.765317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.502 [2024-07-15 11:45:02.765773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.502 [2024-07-15 11:45:02.765795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.502 [2024-07-15 11:45:02.765805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.502 [2024-07-15 11:45:02.766069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.502 [2024-07-15 11:45:02.766343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.502 [2024-07-15 11:45:02.766356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.502 [2024-07-15 11:45:02.766365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.502 [2024-07-15 11:45:02.770608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.502 [2024-07-15 11:45:02.779898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.502 [2024-07-15 11:45:02.780384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.502 [2024-07-15 11:45:02.780406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.502 [2024-07-15 11:45:02.780416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.502 [2024-07-15 11:45:02.780680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.502 [2024-07-15 11:45:02.780943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.502 [2024-07-15 11:45:02.780957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.502 [2024-07-15 11:45:02.780968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.502 [2024-07-15 11:45:02.785217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.502 [2024-07-15 11:45:02.794520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.502 [2024-07-15 11:45:02.795107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.502 [2024-07-15 11:45:02.795129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.502 [2024-07-15 11:45:02.795139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.502 [2024-07-15 11:45:02.795411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.502 [2024-07-15 11:45:02.795676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.502 [2024-07-15 11:45:02.795688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.502 [2024-07-15 11:45:02.795697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.502 [2024-07-15 11:45:02.799945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.502 [2024-07-15 11:45:02.809228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.502 [2024-07-15 11:45:02.809759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.502 [2024-07-15 11:45:02.809781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.502 [2024-07-15 11:45:02.809791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.502 [2024-07-15 11:45:02.810054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.502 [2024-07-15 11:45:02.810326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.502 [2024-07-15 11:45:02.810338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.502 [2024-07-15 11:45:02.810348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.502 [2024-07-15 11:45:02.814600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.502 [2024-07-15 11:45:02.823872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.502 [2024-07-15 11:45:02.824267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.502 [2024-07-15 11:45:02.824289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.502 [2024-07-15 11:45:02.824299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.502 [2024-07-15 11:45:02.824563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.503 [2024-07-15 11:45:02.824827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.503 [2024-07-15 11:45:02.824838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.503 [2024-07-15 11:45:02.824848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.503 [2024-07-15 11:45:02.829090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.503 [2024-07-15 11:45:02.838624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.503 [2024-07-15 11:45:02.839169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.503 [2024-07-15 11:45:02.839190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.503 [2024-07-15 11:45:02.839201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.503 [2024-07-15 11:45:02.839471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.503 [2024-07-15 11:45:02.839735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.503 [2024-07-15 11:45:02.839746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.503 [2024-07-15 11:45:02.839755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.503 [2024-07-15 11:45:02.844006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.503 [2024-07-15 11:45:02.853292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.503 [2024-07-15 11:45:02.853805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.503 [2024-07-15 11:45:02.853826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.503 [2024-07-15 11:45:02.853837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.503 [2024-07-15 11:45:02.854100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.503 [2024-07-15 11:45:02.854373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.503 [2024-07-15 11:45:02.854386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.503 [2024-07-15 11:45:02.854395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.503 [2024-07-15 11:45:02.858636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.503 [2024-07-15 11:45:02.867913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.503 [2024-07-15 11:45:02.868516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.503 [2024-07-15 11:45:02.868558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.503 [2024-07-15 11:45:02.868587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.503 [2024-07-15 11:45:02.869166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.503 [2024-07-15 11:45:02.869498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.503 [2024-07-15 11:45:02.869509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.503 [2024-07-15 11:45:02.869519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.503 [2024-07-15 11:45:02.873770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.503 [2024-07-15 11:45:02.882558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.503 [2024-07-15 11:45:02.883056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.503 [2024-07-15 11:45:02.883097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.503 [2024-07-15 11:45:02.883119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.503 [2024-07-15 11:45:02.883616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.503 [2024-07-15 11:45:02.883881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.503 [2024-07-15 11:45:02.883892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.503 [2024-07-15 11:45:02.883901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.503 [2024-07-15 11:45:02.888151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.503 [2024-07-15 11:45:02.897202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.503 [2024-07-15 11:45:02.897674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.503 [2024-07-15 11:45:02.897695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.503 [2024-07-15 11:45:02.897705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.503 [2024-07-15 11:45:02.897969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.503 [2024-07-15 11:45:02.898232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.503 [2024-07-15 11:45:02.898243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.503 [2024-07-15 11:45:02.898253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.503 [2024-07-15 11:45:02.902515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.503 [2024-07-15 11:45:02.911808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.503 [2024-07-15 11:45:02.912361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.503 [2024-07-15 11:45:02.912383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.503 [2024-07-15 11:45:02.912393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.503 [2024-07-15 11:45:02.912656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.503 [2024-07-15 11:45:02.912922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.503 [2024-07-15 11:45:02.912937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.503 [2024-07-15 11:45:02.912946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.503 [2024-07-15 11:45:02.917200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.503 [2024-07-15 11:45:02.926502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.503 [2024-07-15 11:45:02.926999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.503 [2024-07-15 11:45:02.927020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.503 [2024-07-15 11:45:02.927031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.503 [2024-07-15 11:45:02.927301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.503 [2024-07-15 11:45:02.927567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.503 [2024-07-15 11:45:02.927578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.503 [2024-07-15 11:45:02.927587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.503 [2024-07-15 11:45:02.931867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.503 [2024-07-15 11:45:02.941162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.503 [2024-07-15 11:45:02.941622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.503 [2024-07-15 11:45:02.941644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.503 [2024-07-15 11:45:02.941654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.503 [2024-07-15 11:45:02.941919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.503 [2024-07-15 11:45:02.942183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.503 [2024-07-15 11:45:02.942194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.503 [2024-07-15 11:45:02.942204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.503 [2024-07-15 11:45:02.946456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.503 [2024-07-15 11:45:02.955739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.503 [2024-07-15 11:45:02.956267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.503 [2024-07-15 11:45:02.956289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.503 [2024-07-15 11:45:02.956299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.503 [2024-07-15 11:45:02.956563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.503 [2024-07-15 11:45:02.956827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.503 [2024-07-15 11:45:02.956838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.503 [2024-07-15 11:45:02.956848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.503 [2024-07-15 11:45:02.961091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.764 [2024-07-15 11:45:02.970382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.764 [2024-07-15 11:45:02.970864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-07-15 11:45:02.970885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.764 [2024-07-15 11:45:02.970895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.764 [2024-07-15 11:45:02.971158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.764 [2024-07-15 11:45:02.971431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.764 [2024-07-15 11:45:02.971443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.764 [2024-07-15 11:45:02.971452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.764 [2024-07-15 11:45:02.975699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.764 [2024-07-15 11:45:02.985095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.764 [2024-07-15 11:45:02.985579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-07-15 11:45:02.985602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.764 [2024-07-15 11:45:02.985612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.764 [2024-07-15 11:45:02.985876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.764 [2024-07-15 11:45:02.986140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.764 [2024-07-15 11:45:02.986151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.764 [2024-07-15 11:45:02.986161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.764 [2024-07-15 11:45:02.990428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.764 [2024-07-15 11:45:02.999695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.764 [2024-07-15 11:45:03.000248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-07-15 11:45:03.000275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.764 [2024-07-15 11:45:03.000286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.764 [2024-07-15 11:45:03.000549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.764 [2024-07-15 11:45:03.000813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.764 [2024-07-15 11:45:03.000825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.764 [2024-07-15 11:45:03.000834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.764 [2024-07-15 11:45:03.005086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.764 [2024-07-15 11:45:03.014376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.764 [2024-07-15 11:45:03.014953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-07-15 11:45:03.014974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.764 [2024-07-15 11:45:03.014985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.764 [2024-07-15 11:45:03.015253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.764 [2024-07-15 11:45:03.015528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.764 [2024-07-15 11:45:03.015540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.764 [2024-07-15 11:45:03.015549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.764 [2024-07-15 11:45:03.019795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.764 [2024-07-15 11:45:03.029075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.764 [2024-07-15 11:45:03.029606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-07-15 11:45:03.029628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.764 [2024-07-15 11:45:03.029638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.764 [2024-07-15 11:45:03.029902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.764 [2024-07-15 11:45:03.030166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.764 [2024-07-15 11:45:03.030178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.764 [2024-07-15 11:45:03.030187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.764 [2024-07-15 11:45:03.034441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.764 [2024-07-15 11:45:03.043725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.764 [2024-07-15 11:45:03.044290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-07-15 11:45:03.044333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.764 [2024-07-15 11:45:03.044355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.764 [2024-07-15 11:45:03.044827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.764 [2024-07-15 11:45:03.045091] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.764 [2024-07-15 11:45:03.045102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.764 [2024-07-15 11:45:03.045111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.764 [2024-07-15 11:45:03.049375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.764 [2024-07-15 11:45:03.058398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.764 [2024-07-15 11:45:03.058968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-07-15 11:45:03.059010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.764 [2024-07-15 11:45:03.059031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.764 [2024-07-15 11:45:03.059623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.765 [2024-07-15 11:45:03.059933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.765 [2024-07-15 11:45:03.059945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.765 [2024-07-15 11:45:03.059958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.765 [2024-07-15 11:45:03.064207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.765 [2024-07-15 11:45:03.072973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.765 [2024-07-15 11:45:03.073508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.765 [2024-07-15 11:45:03.073529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.765 [2024-07-15 11:45:03.073539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.765 [2024-07-15 11:45:03.073803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.765 [2024-07-15 11:45:03.074068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.765 [2024-07-15 11:45:03.074079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.765 [2024-07-15 11:45:03.074088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.765 [2024-07-15 11:45:03.078341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.765 [2024-07-15 11:45:03.087605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.765 [2024-07-15 11:45:03.088130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.765 [2024-07-15 11:45:03.088150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.765 [2024-07-15 11:45:03.088160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.765 [2024-07-15 11:45:03.088440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.765 [2024-07-15 11:45:03.088705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.765 [2024-07-15 11:45:03.088716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.765 [2024-07-15 11:45:03.088726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.765 [2024-07-15 11:45:03.092967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.765 [2024-07-15 11:45:03.102234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.765 [2024-07-15 11:45:03.102807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.765 [2024-07-15 11:45:03.102850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.765 [2024-07-15 11:45:03.102873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.765 [2024-07-15 11:45:03.103466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.765 [2024-07-15 11:45:03.103977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.765 [2024-07-15 11:45:03.103988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.765 [2024-07-15 11:45:03.103998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.765 [2024-07-15 11:45:03.108294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.765 [2024-07-15 11:45:03.116806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.765 [2024-07-15 11:45:03.117373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.765 [2024-07-15 11:45:03.117394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.765 [2024-07-15 11:45:03.117404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.765 [2024-07-15 11:45:03.117668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.765 [2024-07-15 11:45:03.117933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.765 [2024-07-15 11:45:03.117944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.765 [2024-07-15 11:45:03.117953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.765 [2024-07-15 11:45:03.122205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.765 [2024-07-15 11:45:03.131481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.765 [2024-07-15 11:45:03.132049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.765 [2024-07-15 11:45:03.132090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.765 [2024-07-15 11:45:03.132111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.765 [2024-07-15 11:45:03.132716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.765 [2024-07-15 11:45:03.132981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.765 [2024-07-15 11:45:03.132992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.765 [2024-07-15 11:45:03.133002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.765 [2024-07-15 11:45:03.137244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.765 [2024-07-15 11:45:03.146040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.765 [2024-07-15 11:45:03.146519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.765 [2024-07-15 11:45:03.146562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.765 [2024-07-15 11:45:03.146584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.765 [2024-07-15 11:45:03.147161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.765 [2024-07-15 11:45:03.147581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.765 [2024-07-15 11:45:03.147598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.765 [2024-07-15 11:45:03.147611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.765 [2024-07-15 11:45:03.153843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.765 [2024-07-15 11:45:03.161458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.765 [2024-07-15 11:45:03.161989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.765 [2024-07-15 11:45:03.162040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.765 [2024-07-15 11:45:03.162061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.765 [2024-07-15 11:45:03.162662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.765 [2024-07-15 11:45:03.163007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.765 [2024-07-15 11:45:03.163019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.765 [2024-07-15 11:45:03.163028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.765 [2024-07-15 11:45:03.167277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.765 [2024-07-15 11:45:03.176049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.765 [2024-07-15 11:45:03.176623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.765 [2024-07-15 11:45:03.176665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.765 [2024-07-15 11:45:03.176686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.765 [2024-07-15 11:45:03.177217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.765 [2024-07-15 11:45:03.177489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.765 [2024-07-15 11:45:03.177501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.765 [2024-07-15 11:45:03.177510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.765 [2024-07-15 11:45:03.181752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.765 [2024-07-15 11:45:03.190782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.765 [2024-07-15 11:45:03.191349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.765 [2024-07-15 11:45:03.191390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.765 [2024-07-15 11:45:03.191412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.765 [2024-07-15 11:45:03.191961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.765 [2024-07-15 11:45:03.192226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.765 [2024-07-15 11:45:03.192237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.765 [2024-07-15 11:45:03.192246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.765 [2024-07-15 11:45:03.196502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.765 [2024-07-15 11:45:03.205522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.765 [2024-07-15 11:45:03.206056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.765 [2024-07-15 11:45:03.206098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.765 [2024-07-15 11:45:03.206120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.765 [2024-07-15 11:45:03.206715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.765 [2024-07-15 11:45:03.207225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.765 [2024-07-15 11:45:03.207236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.765 [2024-07-15 11:45:03.207250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.765 [2024-07-15 11:45:03.211497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.765 [2024-07-15 11:45:03.220269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.766 [2024-07-15 11:45:03.220838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.766 [2024-07-15 11:45:03.220880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:28.766 [2024-07-15 11:45:03.220901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:28.766 [2024-07-15 11:45:03.221339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:28.766 [2024-07-15 11:45:03.221604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.766 [2024-07-15 11:45:03.221615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.766 [2024-07-15 11:45:03.221624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.766 [2024-07-15 11:45:03.225874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.025 [2024-07-15 11:45:03.234905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.025 [2024-07-15 11:45:03.235464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.025 [2024-07-15 11:45:03.235506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.025 [2024-07-15 11:45:03.235528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.025 [2024-07-15 11:45:03.236117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.025 [2024-07-15 11:45:03.236387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.025 [2024-07-15 11:45:03.236398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.025 [2024-07-15 11:45:03.236408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.025 [2024-07-15 11:45:03.240653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.025 [2024-07-15 11:45:03.249671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.025 [2024-07-15 11:45:03.250200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.025 [2024-07-15 11:45:03.250220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.025 [2024-07-15 11:45:03.250231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.025 [2024-07-15 11:45:03.250501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.025 [2024-07-15 11:45:03.250767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.025 [2024-07-15 11:45:03.250778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.025 [2024-07-15 11:45:03.250787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.025 [2024-07-15 11:45:03.255035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.025 [2024-07-15 11:45:03.264311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.025 [2024-07-15 11:45:03.264785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.025 [2024-07-15 11:45:03.264810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.025 [2024-07-15 11:45:03.264820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.025 [2024-07-15 11:45:03.265085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.025 [2024-07-15 11:45:03.265357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.025 [2024-07-15 11:45:03.265369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.025 [2024-07-15 11:45:03.265378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.025 [2024-07-15 11:45:03.269628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.025 [2024-07-15 11:45:03.278907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.025 [2024-07-15 11:45:03.279471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.025 [2024-07-15 11:45:03.279492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.025 [2024-07-15 11:45:03.279502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.025 [2024-07-15 11:45:03.279766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.025 [2024-07-15 11:45:03.280030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.025 [2024-07-15 11:45:03.280041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.025 [2024-07-15 11:45:03.280050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.025 [2024-07-15 11:45:03.284310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.025 [2024-07-15 11:45:03.293597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.025 [2024-07-15 11:45:03.294146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.025 [2024-07-15 11:45:03.294168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.025 [2024-07-15 11:45:03.294177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.025 [2024-07-15 11:45:03.294448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.025 [2024-07-15 11:45:03.294713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.025 [2024-07-15 11:45:03.294724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.025 [2024-07-15 11:45:03.294733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.025 [2024-07-15 11:45:03.298979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.025 [2024-07-15 11:45:03.308262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.025 [2024-07-15 11:45:03.308812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.025 [2024-07-15 11:45:03.308833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.025 [2024-07-15 11:45:03.308843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.025 [2024-07-15 11:45:03.309108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.025 [2024-07-15 11:45:03.309384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.025 [2024-07-15 11:45:03.309397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.025 [2024-07-15 11:45:03.309406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.025 [2024-07-15 11:45:03.313654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.025 [2024-07-15 11:45:03.322928] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.025 [2024-07-15 11:45:03.323476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.025 [2024-07-15 11:45:03.323498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.025 [2024-07-15 11:45:03.323507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.025 [2024-07-15 11:45:03.323772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.025 [2024-07-15 11:45:03.324037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.025 [2024-07-15 11:45:03.324048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.025 [2024-07-15 11:45:03.324057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.025 [2024-07-15 11:45:03.328319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.026 [2024-07-15 11:45:03.337598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.026 [2024-07-15 11:45:03.338129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.026 [2024-07-15 11:45:03.338149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.026 [2024-07-15 11:45:03.338159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.026 [2024-07-15 11:45:03.338429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.026 [2024-07-15 11:45:03.338694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.026 [2024-07-15 11:45:03.338706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.026 [2024-07-15 11:45:03.338715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.026 [2024-07-15 11:45:03.342963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.026 [2024-07-15 11:45:03.352268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.026 [2024-07-15 11:45:03.352803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.026 [2024-07-15 11:45:03.352824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.026 [2024-07-15 11:45:03.352834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.026 [2024-07-15 11:45:03.353098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.026 [2024-07-15 11:45:03.353369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.026 [2024-07-15 11:45:03.353381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.026 [2024-07-15 11:45:03.353390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.026 [2024-07-15 11:45:03.357637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.026 [2024-07-15 11:45:03.366910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.026 [2024-07-15 11:45:03.367450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.026 [2024-07-15 11:45:03.367471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.026 [2024-07-15 11:45:03.367482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.026 [2024-07-15 11:45:03.367745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.026 [2024-07-15 11:45:03.368010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.026 [2024-07-15 11:45:03.368022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.026 [2024-07-15 11:45:03.368031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.026 [2024-07-15 11:45:03.372280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.026 [2024-07-15 11:45:03.381558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.026 [2024-07-15 11:45:03.382110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.026 [2024-07-15 11:45:03.382131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.026 [2024-07-15 11:45:03.382141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.026 [2024-07-15 11:45:03.382410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.026 [2024-07-15 11:45:03.382676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.026 [2024-07-15 11:45:03.382687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.026 [2024-07-15 11:45:03.382696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.026 [2024-07-15 11:45:03.386950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.026 [2024-07-15 11:45:03.396238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.026 [2024-07-15 11:45:03.396807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.026 [2024-07-15 11:45:03.396848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.026 [2024-07-15 11:45:03.396869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.026 [2024-07-15 11:45:03.397463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.026 [2024-07-15 11:45:03.397770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.026 [2024-07-15 11:45:03.397781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.026 [2024-07-15 11:45:03.397790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.026 [2024-07-15 11:45:03.402046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.026 [2024-07-15 11:45:03.410824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.026 [2024-07-15 11:45:03.411407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.026 [2024-07-15 11:45:03.411428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.026 [2024-07-15 11:45:03.411446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.026 [2024-07-15 11:45:03.411709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.026 [2024-07-15 11:45:03.411973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.026 [2024-07-15 11:45:03.411984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.026 [2024-07-15 11:45:03.411993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.026 [2024-07-15 11:45:03.416250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.026 [2024-07-15 11:45:03.425546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.026 [2024-07-15 11:45:03.426094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.026 [2024-07-15 11:45:03.426115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.026 [2024-07-15 11:45:03.426125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.026 [2024-07-15 11:45:03.426395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.026 [2024-07-15 11:45:03.426660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.026 [2024-07-15 11:45:03.426671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.026 [2024-07-15 11:45:03.426680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.026 [2024-07-15 11:45:03.430934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.026 [2024-07-15 11:45:03.440222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.026 [2024-07-15 11:45:03.440781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.026 [2024-07-15 11:45:03.440824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.026 [2024-07-15 11:45:03.440845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.026 [2024-07-15 11:45:03.441436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.026 [2024-07-15 11:45:03.441774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.026 [2024-07-15 11:45:03.441786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.026 [2024-07-15 11:45:03.441795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.026 [2024-07-15 11:45:03.447996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.026 [2024-07-15 11:45:03.455620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.026 [2024-07-15 11:45:03.456177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.026 [2024-07-15 11:45:03.456198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.026 [2024-07-15 11:45:03.456208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.026 [2024-07-15 11:45:03.456479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.026 [2024-07-15 11:45:03.456745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.026 [2024-07-15 11:45:03.456760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.026 [2024-07-15 11:45:03.456769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.026 [2024-07-15 11:45:03.461022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.026 [2024-07-15 11:45:03.470297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.026 [2024-07-15 11:45:03.470841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.026 [2024-07-15 11:45:03.470862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.026 [2024-07-15 11:45:03.470872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.026 [2024-07-15 11:45:03.471136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.026 [2024-07-15 11:45:03.471407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.026 [2024-07-15 11:45:03.471419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.026 [2024-07-15 11:45:03.471428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.026 [2024-07-15 11:45:03.475906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.026 [2024-07-15 11:45:03.484944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.026 [2024-07-15 11:45:03.485439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.026 [2024-07-15 11:45:03.485462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.026 [2024-07-15 11:45:03.485472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.026 [2024-07-15 11:45:03.485736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.026 [2024-07-15 11:45:03.486000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.026 [2024-07-15 11:45:03.486012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.026 [2024-07-15 11:45:03.486021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.286 [2024-07-15 11:45:03.490283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.286 [2024-07-15 11:45:03.499562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.286 [2024-07-15 11:45:03.500120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.286 [2024-07-15 11:45:03.500142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.286 [2024-07-15 11:45:03.500152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.286 [2024-07-15 11:45:03.500424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.286 [2024-07-15 11:45:03.500689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.286 [2024-07-15 11:45:03.500701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.286 [2024-07-15 11:45:03.500710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.286 [2024-07-15 11:45:03.504956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.286 [2024-07-15 11:45:03.514231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.286 [2024-07-15 11:45:03.514784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.286 [2024-07-15 11:45:03.514805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.286 [2024-07-15 11:45:03.514815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.286 [2024-07-15 11:45:03.515078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.286 [2024-07-15 11:45:03.515350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.286 [2024-07-15 11:45:03.515363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.286 [2024-07-15 11:45:03.515372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.286 [2024-07-15 11:45:03.519610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.286 [2024-07-15 11:45:03.528879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.286 [2024-07-15 11:45:03.529340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.286 [2024-07-15 11:45:03.529362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.286 [2024-07-15 11:45:03.529372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.286 [2024-07-15 11:45:03.529636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.286 [2024-07-15 11:45:03.529900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.286 [2024-07-15 11:45:03.529911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.286 [2024-07-15 11:45:03.529920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.286 [2024-07-15 11:45:03.534169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.286 [2024-07-15 11:45:03.543442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.286 [2024-07-15 11:45:03.543995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.286 [2024-07-15 11:45:03.544016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.286 [2024-07-15 11:45:03.544026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.286 [2024-07-15 11:45:03.544297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.286 [2024-07-15 11:45:03.544562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.286 [2024-07-15 11:45:03.544573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.286 [2024-07-15 11:45:03.544583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.286 [2024-07-15 11:45:03.548827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.286 [2024-07-15 11:45:03.558130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.286 [2024-07-15 11:45:03.558696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.286 [2024-07-15 11:45:03.558717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.286 [2024-07-15 11:45:03.558727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.286 [2024-07-15 11:45:03.558995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.286 [2024-07-15 11:45:03.559266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.286 [2024-07-15 11:45:03.559278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.286 [2024-07-15 11:45:03.559287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.286 [2024-07-15 11:45:03.563534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.286 [2024-07-15 11:45:03.572801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.286 [2024-07-15 11:45:03.573363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.286 [2024-07-15 11:45:03.573406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.286 [2024-07-15 11:45:03.573428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.286 [2024-07-15 11:45:03.573956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.286 [2024-07-15 11:45:03.574220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.286 [2024-07-15 11:45:03.574232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.286 [2024-07-15 11:45:03.574241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.286 [2024-07-15 11:45:03.578494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.286 [2024-07-15 11:45:03.587513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.286 [2024-07-15 11:45:03.588100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.286 [2024-07-15 11:45:03.588142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.286 [2024-07-15 11:45:03.588163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.286 [2024-07-15 11:45:03.588770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.286 [2024-07-15 11:45:03.589101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.286 [2024-07-15 11:45:03.589113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.286 [2024-07-15 11:45:03.589122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.286 [2024-07-15 11:45:03.593376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.286 [2024-07-15 11:45:03.602146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.286 [2024-07-15 11:45:03.602710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.286 [2024-07-15 11:45:03.602732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.286 [2024-07-15 11:45:03.602742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.286 [2024-07-15 11:45:03.603006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.286 [2024-07-15 11:45:03.603277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.286 [2024-07-15 11:45:03.603289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.286 [2024-07-15 11:45:03.603302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.286 [2024-07-15 11:45:03.607550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.286 [2024-07-15 11:45:03.616828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.286 [2024-07-15 11:45:03.617333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.286 [2024-07-15 11:45:03.617355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.286 [2024-07-15 11:45:03.617365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.286 [2024-07-15 11:45:03.617629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.286 [2024-07-15 11:45:03.617893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.286 [2024-07-15 11:45:03.617904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.286 [2024-07-15 11:45:03.617913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.286 [2024-07-15 11:45:03.622167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.286 [2024-07-15 11:45:03.631447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.286 [2024-07-15 11:45:03.632022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.286 [2024-07-15 11:45:03.632064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.286 [2024-07-15 11:45:03.632085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.286 [2024-07-15 11:45:03.632677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.286 [2024-07-15 11:45:03.633268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.286 [2024-07-15 11:45:03.633292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.286 [2024-07-15 11:45:03.633313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.286 [2024-07-15 11:45:03.637653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.286 [2024-07-15 11:45:03.646177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.286 [2024-07-15 11:45:03.646755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.287 [2024-07-15 11:45:03.646797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.287 [2024-07-15 11:45:03.646818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.287 [2024-07-15 11:45:03.647158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.287 [2024-07-15 11:45:03.647430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.287 [2024-07-15 11:45:03.647442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.287 [2024-07-15 11:45:03.647451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.287 [2024-07-15 11:45:03.651704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.287 [2024-07-15 11:45:03.660720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.287 [2024-07-15 11:45:03.661280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.287 [2024-07-15 11:45:03.661301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.287 [2024-07-15 11:45:03.661311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.287 [2024-07-15 11:45:03.661576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.287 [2024-07-15 11:45:03.661840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.287 [2024-07-15 11:45:03.661852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.287 [2024-07-15 11:45:03.661861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.287 [2024-07-15 11:45:03.666116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.287 [2024-07-15 11:45:03.675383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.287 [2024-07-15 11:45:03.675915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.287 [2024-07-15 11:45:03.675936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.287 [2024-07-15 11:45:03.675946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.287 [2024-07-15 11:45:03.676210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.287 [2024-07-15 11:45:03.676482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.287 [2024-07-15 11:45:03.676494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.287 [2024-07-15 11:45:03.676503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.287 [2024-07-15 11:45:03.680752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.287 [2024-07-15 11:45:03.690031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.287 [2024-07-15 11:45:03.690577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.287 [2024-07-15 11:45:03.690599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.287 [2024-07-15 11:45:03.690610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.287 [2024-07-15 11:45:03.690875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.287 [2024-07-15 11:45:03.691139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.287 [2024-07-15 11:45:03.691151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.287 [2024-07-15 11:45:03.691160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.287 [2024-07-15 11:45:03.695420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.287 [2024-07-15 11:45:03.704698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.287 [2024-07-15 11:45:03.705226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.287 [2024-07-15 11:45:03.705247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.287 [2024-07-15 11:45:03.705263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.287 [2024-07-15 11:45:03.705528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.287 [2024-07-15 11:45:03.705795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.287 [2024-07-15 11:45:03.705806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.287 [2024-07-15 11:45:03.705815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.287 [2024-07-15 11:45:03.710061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.287 [2024-07-15 11:45:03.719338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.287 [2024-07-15 11:45:03.719808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.287 [2024-07-15 11:45:03.719828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.287 [2024-07-15 11:45:03.719838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.287 [2024-07-15 11:45:03.720101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.287 [2024-07-15 11:45:03.720373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.287 [2024-07-15 11:45:03.720385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.287 [2024-07-15 11:45:03.720394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.287 [2024-07-15 11:45:03.724648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.287 [2024-07-15 11:45:03.733927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.287 [2024-07-15 11:45:03.734487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.287 [2024-07-15 11:45:03.734531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.287 [2024-07-15 11:45:03.734553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.287 [2024-07-15 11:45:03.735131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.287 [2024-07-15 11:45:03.735613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.287 [2024-07-15 11:45:03.735625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.287 [2024-07-15 11:45:03.735634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.287 [2024-07-15 11:45:03.739889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.548 [2024-07-15 11:45:03.748841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.548 [2024-07-15 11:45:03.749311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.548 [2024-07-15 11:45:03.749334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.548 [2024-07-15 11:45:03.749344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.548 [2024-07-15 11:45:03.749608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.548 [2024-07-15 11:45:03.749874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.548 [2024-07-15 11:45:03.749886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.548 [2024-07-15 11:45:03.749895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.548 [2024-07-15 11:45:03.754152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.548 [2024-07-15 11:45:03.763467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.548 [2024-07-15 11:45:03.764033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.548 [2024-07-15 11:45:03.764054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.548 [2024-07-15 11:45:03.764064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.548 [2024-07-15 11:45:03.764334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.548 [2024-07-15 11:45:03.764599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.548 [2024-07-15 11:45:03.764610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.548 [2024-07-15 11:45:03.764620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.548 [2024-07-15 11:45:03.768867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.548 [2024-07-15 11:45:03.778144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.548 [2024-07-15 11:45:03.778718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.548 [2024-07-15 11:45:03.778761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.548 [2024-07-15 11:45:03.778782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.548 [2024-07-15 11:45:03.779373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.548 [2024-07-15 11:45:03.779697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.548 [2024-07-15 11:45:03.779709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.548 [2024-07-15 11:45:03.779718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.548 [2024-07-15 11:45:03.783963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.548 [2024-07-15 11:45:03.792745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.548 [2024-07-15 11:45:03.793274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.548 [2024-07-15 11:45:03.793296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.548 [2024-07-15 11:45:03.793306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.548 [2024-07-15 11:45:03.793569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.548 [2024-07-15 11:45:03.793834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.548 [2024-07-15 11:45:03.793846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.548 [2024-07-15 11:45:03.793855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.548 [2024-07-15 11:45:03.798104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.548 [2024-07-15 11:45:03.807391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.548 [2024-07-15 11:45:03.807984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.548 [2024-07-15 11:45:03.808033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.548 [2024-07-15 11:45:03.808055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.548 [2024-07-15 11:45:03.808578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.548 [2024-07-15 11:45:03.808844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.548 [2024-07-15 11:45:03.808856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.548 [2024-07-15 11:45:03.808865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.548 [2024-07-15 11:45:03.813114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.548 [2024-07-15 11:45:03.822141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.548 [2024-07-15 11:45:03.822715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.548 [2024-07-15 11:45:03.822758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.548 [2024-07-15 11:45:03.822779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.548 [2024-07-15 11:45:03.823371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.548 [2024-07-15 11:45:03.823721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.548 [2024-07-15 11:45:03.823733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.548 [2024-07-15 11:45:03.823742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.548 [2024-07-15 11:45:03.827993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.548 [2024-07-15 11:45:03.836769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.548 [2024-07-15 11:45:03.837356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.548 [2024-07-15 11:45:03.837398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.548 [2024-07-15 11:45:03.837419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.548 [2024-07-15 11:45:03.837870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.548 [2024-07-15 11:45:03.838134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.548 [2024-07-15 11:45:03.838145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.548 [2024-07-15 11:45:03.838154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.548 [2024-07-15 11:45:03.842405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.548 [2024-07-15 11:45:03.851438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.548 [2024-07-15 11:45:03.851991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.548 [2024-07-15 11:45:03.852012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.548 [2024-07-15 11:45:03.852022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.548 [2024-07-15 11:45:03.852294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.548 [2024-07-15 11:45:03.852563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.548 [2024-07-15 11:45:03.852574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.548 [2024-07-15 11:45:03.852583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.548 [2024-07-15 11:45:03.856834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.548 [2024-07-15 11:45:03.866109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.548 [2024-07-15 11:45:03.866680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.548 [2024-07-15 11:45:03.866701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.548 [2024-07-15 11:45:03.866711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.548 [2024-07-15 11:45:03.866976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.548 [2024-07-15 11:45:03.867242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.548 [2024-07-15 11:45:03.867260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.548 [2024-07-15 11:45:03.867271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.548 [2024-07-15 11:45:03.871518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.548 [2024-07-15 11:45:03.880783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.548 [2024-07-15 11:45:03.881342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.548 [2024-07-15 11:45:03.881396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.548 [2024-07-15 11:45:03.881418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.548 [2024-07-15 11:45:03.881952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.548 [2024-07-15 11:45:03.882217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.548 [2024-07-15 11:45:03.882228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.548 [2024-07-15 11:45:03.882237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.548 [2024-07-15 11:45:03.886492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.548 [2024-07-15 11:45:03.895530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.548 [2024-07-15 11:45:03.896090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.548 [2024-07-15 11:45:03.896132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.548 [2024-07-15 11:45:03.896153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.548 [2024-07-15 11:45:03.896701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.548 [2024-07-15 11:45:03.896973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.549 [2024-07-15 11:45:03.896985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.549 [2024-07-15 11:45:03.896995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.549 [2024-07-15 11:45:03.901237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.549 [2024-07-15 11:45:03.910260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.549 [2024-07-15 11:45:03.910821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.549 [2024-07-15 11:45:03.910862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.549 [2024-07-15 11:45:03.910883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.549 [2024-07-15 11:45:03.911477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.549 [2024-07-15 11:45:03.911890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.549 [2024-07-15 11:45:03.911901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.549 [2024-07-15 11:45:03.911912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.549 [2024-07-15 11:45:03.916160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.549 [2024-07-15 11:45:03.924935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.549 [2024-07-15 11:45:03.925463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.549 [2024-07-15 11:45:03.925484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.549 [2024-07-15 11:45:03.925494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.549 [2024-07-15 11:45:03.925759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.549 [2024-07-15 11:45:03.926022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.549 [2024-07-15 11:45:03.926033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.549 [2024-07-15 11:45:03.926043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.549 [2024-07-15 11:45:03.930296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.549 [2024-07-15 11:45:03.939567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.549 [2024-07-15 11:45:03.940096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.549 [2024-07-15 11:45:03.940144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.549 [2024-07-15 11:45:03.940166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.549 [2024-07-15 11:45:03.940760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.549 [2024-07-15 11:45:03.941097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.549 [2024-07-15 11:45:03.941109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.549 [2024-07-15 11:45:03.941118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.549 [2024-07-15 11:45:03.945364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.549 [2024-07-15 11:45:03.954125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.549 [2024-07-15 11:45:03.954683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.549 [2024-07-15 11:45:03.954704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.549 [2024-07-15 11:45:03.954718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.549 [2024-07-15 11:45:03.954983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.549 [2024-07-15 11:45:03.955247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.549 [2024-07-15 11:45:03.955266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.549 [2024-07-15 11:45:03.955275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.549 [2024-07-15 11:45:03.959521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.549 [2024-07-15 11:45:03.968789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.549 [2024-07-15 11:45:03.969363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.549 [2024-07-15 11:45:03.969408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.549 [2024-07-15 11:45:03.969430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.549 [2024-07-15 11:45:03.970009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.549 [2024-07-15 11:45:03.970318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.549 [2024-07-15 11:45:03.970330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.549 [2024-07-15 11:45:03.970340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.549 [2024-07-15 11:45:03.974584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.549 [2024-07-15 11:45:03.983353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.549 [2024-07-15 11:45:03.983917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.549 [2024-07-15 11:45:03.983959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.549 [2024-07-15 11:45:03.983980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.549 [2024-07-15 11:45:03.984573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.549 [2024-07-15 11:45:03.984955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.549 [2024-07-15 11:45:03.984967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.549 [2024-07-15 11:45:03.984976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.549 [2024-07-15 11:45:03.989226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.549 [2024-07-15 11:45:03.997998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.549 [2024-07-15 11:45:03.998555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.549 [2024-07-15 11:45:03.998576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.549 [2024-07-15 11:45:03.998586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.549 [2024-07-15 11:45:03.998850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.549 [2024-07-15 11:45:03.999114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.549 [2024-07-15 11:45:03.999129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.549 [2024-07-15 11:45:03.999138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.549 [2024-07-15 11:45:04.003488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.810 [2024-07-15 11:45:04.012765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.810 [2024-07-15 11:45:04.013249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.810 [2024-07-15 11:45:04.013277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.810 [2024-07-15 11:45:04.013288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.810 [2024-07-15 11:45:04.013551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.810 [2024-07-15 11:45:04.013816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.810 [2024-07-15 11:45:04.013827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.810 [2024-07-15 11:45:04.013837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.810 [2024-07-15 11:45:04.018087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.810 [2024-07-15 11:45:04.027370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.810 [2024-07-15 11:45:04.027813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.810 [2024-07-15 11:45:04.027834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.810 [2024-07-15 11:45:04.027844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.810 [2024-07-15 11:45:04.028108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.810 [2024-07-15 11:45:04.028378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.810 [2024-07-15 11:45:04.028391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.810 [2024-07-15 11:45:04.028400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.810 [2024-07-15 11:45:04.032642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.810 [2024-07-15 11:45:04.041913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.810 [2024-07-15 11:45:04.042466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.810 [2024-07-15 11:45:04.042487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.810 [2024-07-15 11:45:04.042497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.810 [2024-07-15 11:45:04.042761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.810 [2024-07-15 11:45:04.043026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.810 [2024-07-15 11:45:04.043037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.810 [2024-07-15 11:45:04.043046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.810 [2024-07-15 11:45:04.047297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.810 [2024-07-15 11:45:04.056583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.810 [2024-07-15 11:45:04.057142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.810 [2024-07-15 11:45:04.057163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.810 [2024-07-15 11:45:04.057173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.810 [2024-07-15 11:45:04.057443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.810 [2024-07-15 11:45:04.057708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.810 [2024-07-15 11:45:04.057719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.810 [2024-07-15 11:45:04.057729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.810 [2024-07-15 11:45:04.061972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.810 [2024-07-15 11:45:04.071246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.810 [2024-07-15 11:45:04.071814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.810 [2024-07-15 11:45:04.071835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.810 [2024-07-15 11:45:04.071845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.810 [2024-07-15 11:45:04.072109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.810 [2024-07-15 11:45:04.072383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.810 [2024-07-15 11:45:04.072395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.810 [2024-07-15 11:45:04.072404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.810 [2024-07-15 11:45:04.076660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.810 [2024-07-15 11:45:04.085960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.810 [2024-07-15 11:45:04.086433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.810 [2024-07-15 11:45:04.086455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.810 [2024-07-15 11:45:04.086466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.810 [2024-07-15 11:45:04.086729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.810 [2024-07-15 11:45:04.086994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.810 [2024-07-15 11:45:04.087006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.810 [2024-07-15 11:45:04.087015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.810 [2024-07-15 11:45:04.091281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.810 [2024-07-15 11:45:04.100552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.810 [2024-07-15 11:45:04.101109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.810 [2024-07-15 11:45:04.101151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.810 [2024-07-15 11:45:04.101172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.810 [2024-07-15 11:45:04.101773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.810 [2024-07-15 11:45:04.102321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.810 [2024-07-15 11:45:04.102333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.810 [2024-07-15 11:45:04.102343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.810 [2024-07-15 11:45:04.106593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.810 [2024-07-15 11:45:04.115131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.810 [2024-07-15 11:45:04.115635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.810 [2024-07-15 11:45:04.115657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.810 [2024-07-15 11:45:04.115666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.810 [2024-07-15 11:45:04.115930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.810 [2024-07-15 11:45:04.116194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.810 [2024-07-15 11:45:04.116206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.810 [2024-07-15 11:45:04.116215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.810 [2024-07-15 11:45:04.120471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.810 [2024-07-15 11:45:04.129753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.810 [2024-07-15 11:45:04.130330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.810 [2024-07-15 11:45:04.130351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.810 [2024-07-15 11:45:04.130361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.810 [2024-07-15 11:45:04.130625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.810 [2024-07-15 11:45:04.130889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.810 [2024-07-15 11:45:04.130899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.810 [2024-07-15 11:45:04.130909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.810 [2024-07-15 11:45:04.135161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.810 [2024-07-15 11:45:04.144451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.810 [2024-07-15 11:45:04.144967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.810 [2024-07-15 11:45:04.145010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.810 [2024-07-15 11:45:04.145031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.811 [2024-07-15 11:45:04.145501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.811 [2024-07-15 11:45:04.145766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.811 [2024-07-15 11:45:04.145777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.811 [2024-07-15 11:45:04.145795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.811 [2024-07-15 11:45:04.150051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.811 [2024-07-15 11:45:04.159081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.811 [2024-07-15 11:45:04.159585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.811 [2024-07-15 11:45:04.159606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.811 [2024-07-15 11:45:04.159616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.811 [2024-07-15 11:45:04.159879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.811 [2024-07-15 11:45:04.160146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.811 [2024-07-15 11:45:04.160157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.811 [2024-07-15 11:45:04.160166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.811 [2024-07-15 11:45:04.164425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.811 [2024-07-15 11:45:04.173708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.811 [2024-07-15 11:45:04.174179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.811 [2024-07-15 11:45:04.174200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.811 [2024-07-15 11:45:04.174210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.811 [2024-07-15 11:45:04.174482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.811 [2024-07-15 11:45:04.174747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.811 [2024-07-15 11:45:04.174758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.811 [2024-07-15 11:45:04.174767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.811 [2024-07-15 11:45:04.179039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.811 [2024-07-15 11:45:04.188340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.811 [2024-07-15 11:45:04.188909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.811 [2024-07-15 11:45:04.188931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.811 [2024-07-15 11:45:04.188941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.811 [2024-07-15 11:45:04.189205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.811 [2024-07-15 11:45:04.189480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.811 [2024-07-15 11:45:04.189491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.811 [2024-07-15 11:45:04.189501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.811 [2024-07-15 11:45:04.193750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.811 [2024-07-15 11:45:04.203032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.811 [2024-07-15 11:45:04.203525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.811 [2024-07-15 11:45:04.203546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.811 [2024-07-15 11:45:04.203556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.811 [2024-07-15 11:45:04.203821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.811 [2024-07-15 11:45:04.204086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.811 [2024-07-15 11:45:04.204097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.811 [2024-07-15 11:45:04.204107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.811 [2024-07-15 11:45:04.208365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.811 [2024-07-15 11:45:04.217668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.811 [2024-07-15 11:45:04.218239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.811 [2024-07-15 11:45:04.218292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.811 [2024-07-15 11:45:04.218315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.811 [2024-07-15 11:45:04.218755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.811 [2024-07-15 11:45:04.219019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.811 [2024-07-15 11:45:04.219030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.811 [2024-07-15 11:45:04.219039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.811 [2024-07-15 11:45:04.223291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.811 [2024-07-15 11:45:04.232315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.811 [2024-07-15 11:45:04.232850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.811 [2024-07-15 11:45:04.232891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.811 [2024-07-15 11:45:04.232912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.811 [2024-07-15 11:45:04.233495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.811 [2024-07-15 11:45:04.233761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.811 [2024-07-15 11:45:04.233773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.811 [2024-07-15 11:45:04.233782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.811 [2024-07-15 11:45:04.238034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.811 [2024-07-15 11:45:04.247066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.811 [2024-07-15 11:45:04.247533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.811 [2024-07-15 11:45:04.247575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.811 [2024-07-15 11:45:04.247597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.811 [2024-07-15 11:45:04.248175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.811 [2024-07-15 11:45:04.248580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.811 [2024-07-15 11:45:04.248593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.811 [2024-07-15 11:45:04.248602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.811 [2024-07-15 11:45:04.252847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.811 [2024-07-15 11:45:04.261635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.811 [2024-07-15 11:45:04.262217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.811 [2024-07-15 11:45:04.262238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:29.811 [2024-07-15 11:45:04.262248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:29.811 [2024-07-15 11:45:04.262520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:29.811 [2024-07-15 11:45:04.262785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.811 [2024-07-15 11:45:04.262796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.811 [2024-07-15 11:45:04.262805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.811 [2024-07-15 11:45:04.267056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.073 [2024-07-15 11:45:04.276347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.073 [2024-07-15 11:45:04.276911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.073 [2024-07-15 11:45:04.276953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.073 [2024-07-15 11:45:04.276974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.073 [2024-07-15 11:45:04.277567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.073 [2024-07-15 11:45:04.277877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.073 [2024-07-15 11:45:04.277889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.073 [2024-07-15 11:45:04.277898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.073 [2024-07-15 11:45:04.282142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.073 [2024-07-15 11:45:04.290932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.073 [2024-07-15 11:45:04.291507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.073 [2024-07-15 11:45:04.291529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.073 [2024-07-15 11:45:04.291539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.073 [2024-07-15 11:45:04.291803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.073 [2024-07-15 11:45:04.292069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.073 [2024-07-15 11:45:04.292080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.073 [2024-07-15 11:45:04.292089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.073 [2024-07-15 11:45:04.296346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.073 [2024-07-15 11:45:04.305632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.073 [2024-07-15 11:45:04.306107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.073 [2024-07-15 11:45:04.306128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.073 [2024-07-15 11:45:04.306138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.073 [2024-07-15 11:45:04.306408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.073 [2024-07-15 11:45:04.306674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.073 [2024-07-15 11:45:04.306686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.073 [2024-07-15 11:45:04.306695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.073 [2024-07-15 11:45:04.310942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.073 [2024-07-15 11:45:04.320222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.073 [2024-07-15 11:45:04.320746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.073 [2024-07-15 11:45:04.320768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.073 [2024-07-15 11:45:04.320778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.073 [2024-07-15 11:45:04.321042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.073 [2024-07-15 11:45:04.321314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.073 [2024-07-15 11:45:04.321326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.073 [2024-07-15 11:45:04.321335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.074 [2024-07-15 11:45:04.325580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.074 [2024-07-15 11:45:04.334861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.074 [2024-07-15 11:45:04.335427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.074 [2024-07-15 11:45:04.335470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.074 [2024-07-15 11:45:04.335491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.074 [2024-07-15 11:45:04.335989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.074 [2024-07-15 11:45:04.336261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.074 [2024-07-15 11:45:04.336273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.074 [2024-07-15 11:45:04.336282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.074 [2024-07-15 11:45:04.340533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.074 [2024-07-15 11:45:04.349563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.074 [2024-07-15 11:45:04.350058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.074 [2024-07-15 11:45:04.350079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.074 [2024-07-15 11:45:04.350093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.074 [2024-07-15 11:45:04.350365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.074 [2024-07-15 11:45:04.350630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.074 [2024-07-15 11:45:04.350641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.074 [2024-07-15 11:45:04.350651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.074 [2024-07-15 11:45:04.354900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.074 [2024-07-15 11:45:04.364182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.074 [2024-07-15 11:45:04.364661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.074 [2024-07-15 11:45:04.364683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.074 [2024-07-15 11:45:04.364693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.074 [2024-07-15 11:45:04.364957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.074 [2024-07-15 11:45:04.365221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.074 [2024-07-15 11:45:04.365232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.074 [2024-07-15 11:45:04.365241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.074 [2024-07-15 11:45:04.369495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.074 [2024-07-15 11:45:04.378781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.074 [2024-07-15 11:45:04.379231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.074 [2024-07-15 11:45:04.379284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.074 [2024-07-15 11:45:04.379306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.074 [2024-07-15 11:45:04.379886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.074 [2024-07-15 11:45:04.380151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.074 [2024-07-15 11:45:04.380162] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.074 [2024-07-15 11:45:04.380171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.074 [2024-07-15 11:45:04.384430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.074 [2024-07-15 11:45:04.393503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.074 [2024-07-15 11:45:04.393981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.074 [2024-07-15 11:45:04.394002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.074 [2024-07-15 11:45:04.394013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.074 [2024-07-15 11:45:04.394284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.074 [2024-07-15 11:45:04.394550] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.074 [2024-07-15 11:45:04.394565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.074 [2024-07-15 11:45:04.394575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.074 [2024-07-15 11:45:04.398816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.074 [2024-07-15 11:45:04.408103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.074 [2024-07-15 11:45:04.408590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.074 [2024-07-15 11:45:04.408612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.074 [2024-07-15 11:45:04.408622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.074 [2024-07-15 11:45:04.408885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.074 [2024-07-15 11:45:04.409151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.074 [2024-07-15 11:45:04.409163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.074 [2024-07-15 11:45:04.409172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.074 [2024-07-15 11:45:04.413428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.074 [2024-07-15 11:45:04.422720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.074 [2024-07-15 11:45:04.423288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.074 [2024-07-15 11:45:04.423331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.074 [2024-07-15 11:45:04.423353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.074 [2024-07-15 11:45:04.423927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.074 [2024-07-15 11:45:04.424192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.075 [2024-07-15 11:45:04.424203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.075 [2024-07-15 11:45:04.424212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.075 [2024-07-15 11:45:04.428465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.075 [2024-07-15 11:45:04.437500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.075 [2024-07-15 11:45:04.438037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.075 [2024-07-15 11:45:04.438058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.075 [2024-07-15 11:45:04.438068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.075 [2024-07-15 11:45:04.438341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.075 [2024-07-15 11:45:04.438606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.075 [2024-07-15 11:45:04.438617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.075 [2024-07-15 11:45:04.438626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.075 [2024-07-15 11:45:04.442873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.075 [2024-07-15 11:45:04.452187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.075 [2024-07-15 11:45:04.452752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.075 [2024-07-15 11:45:04.452774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.075 [2024-07-15 11:45:04.452784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.075 [2024-07-15 11:45:04.453050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.075 [2024-07-15 11:45:04.453322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.075 [2024-07-15 11:45:04.453334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.075 [2024-07-15 11:45:04.453344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.075 [2024-07-15 11:45:04.457586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.075 [2024-07-15 11:45:04.466874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.075 [2024-07-15 11:45:04.467321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.075 [2024-07-15 11:45:04.467343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.075 [2024-07-15 11:45:04.467354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.075 [2024-07-15 11:45:04.467617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.075 [2024-07-15 11:45:04.467882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.075 [2024-07-15 11:45:04.467894] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.075 [2024-07-15 11:45:04.467903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.075 [2024-07-15 11:45:04.472151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.075 [2024-07-15 11:45:04.481668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.075 [2024-07-15 11:45:04.482209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.075 [2024-07-15 11:45:04.482265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.075 [2024-07-15 11:45:04.482289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.075 [2024-07-15 11:45:04.482868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.075 [2024-07-15 11:45:04.483389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.075 [2024-07-15 11:45:04.483401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.075 [2024-07-15 11:45:04.483411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.075 [2024-07-15 11:45:04.487668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.075 [2024-07-15 11:45:04.496469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.075 [2024-07-15 11:45:04.496999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.075 [2024-07-15 11:45:04.497021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.075 [2024-07-15 11:45:04.497035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.075 [2024-07-15 11:45:04.497307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.075 [2024-07-15 11:45:04.497573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.075 [2024-07-15 11:45:04.497584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.075 [2024-07-15 11:45:04.497594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.075 [2024-07-15 11:45:04.501843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.075 [2024-07-15 11:45:04.511130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.075 [2024-07-15 11:45:04.511607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.075 [2024-07-15 11:45:04.511628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.075 [2024-07-15 11:45:04.511638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.075 [2024-07-15 11:45:04.511902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.075 [2024-07-15 11:45:04.512165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.075 [2024-07-15 11:45:04.512176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.075 [2024-07-15 11:45:04.512185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.075 [2024-07-15 11:45:04.516432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.075 [2024-07-15 11:45:04.525731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.076 [2024-07-15 11:45:04.526281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.076 [2024-07-15 11:45:04.526302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.076 [2024-07-15 11:45:04.526312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.076 [2024-07-15 11:45:04.526576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.076 [2024-07-15 11:45:04.526841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.076 [2024-07-15 11:45:04.526852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.076 [2024-07-15 11:45:04.526861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.076 [2024-07-15 11:45:04.531112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.336 [2024-07-15 11:45:04.540401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.336 [2024-07-15 11:45:04.540966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-07-15 11:45:04.540987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.336 [2024-07-15 11:45:04.540998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.336 [2024-07-15 11:45:04.541269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.336 [2024-07-15 11:45:04.541536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.336 [2024-07-15 11:45:04.541551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.336 [2024-07-15 11:45:04.541560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.336 [2024-07-15 11:45:04.545806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.336 [2024-07-15 11:45:04.555085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.336 [2024-07-15 11:45:04.555621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-07-15 11:45:04.555664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.336 [2024-07-15 11:45:04.555685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.336 [2024-07-15 11:45:04.556274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.336 [2024-07-15 11:45:04.556724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.336 [2024-07-15 11:45:04.556735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.336 [2024-07-15 11:45:04.556744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.336 [2024-07-15 11:45:04.560992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.336 [2024-07-15 11:45:04.569768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.336 [2024-07-15 11:45:04.570306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-07-15 11:45:04.570348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.336 [2024-07-15 11:45:04.570369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.336 [2024-07-15 11:45:04.570945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.336 [2024-07-15 11:45:04.571536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.336 [2024-07-15 11:45:04.571569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.336 [2024-07-15 11:45:04.571578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.336 [2024-07-15 11:45:04.575820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.336 [2024-07-15 11:45:04.584338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.336 [2024-07-15 11:45:04.584865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-07-15 11:45:04.584886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.336 [2024-07-15 11:45:04.584896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.336 [2024-07-15 11:45:04.585160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.336 [2024-07-15 11:45:04.585432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.336 [2024-07-15 11:45:04.585444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.337 [2024-07-15 11:45:04.585453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.337 [2024-07-15 11:45:04.589703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.337 [2024-07-15 11:45:04.598996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.337 [2024-07-15 11:45:04.599582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-07-15 11:45:04.599625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.337 [2024-07-15 11:45:04.599646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.337 [2024-07-15 11:45:04.600224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.337 [2024-07-15 11:45:04.600560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.337 [2024-07-15 11:45:04.600573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.337 [2024-07-15 11:45:04.600582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.337 [2024-07-15 11:45:04.604821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.337 [2024-07-15 11:45:04.613590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.337 [2024-07-15 11:45:04.614118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-07-15 11:45:04.614139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.337 [2024-07-15 11:45:04.614149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.337 [2024-07-15 11:45:04.614420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.337 [2024-07-15 11:45:04.614685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.337 [2024-07-15 11:45:04.614696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.337 [2024-07-15 11:45:04.614705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.337 [2024-07-15 11:45:04.618952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.337 [2024-07-15 11:45:04.628216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.337 [2024-07-15 11:45:04.628752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-07-15 11:45:04.628794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.337 [2024-07-15 11:45:04.628815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.337 [2024-07-15 11:45:04.629406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.337 [2024-07-15 11:45:04.629736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.337 [2024-07-15 11:45:04.629747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.337 [2024-07-15 11:45:04.629756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.337 [2024-07-15 11:45:04.633992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.337 [2024-07-15 11:45:04.642760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.337 [2024-07-15 11:45:04.643296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-07-15 11:45:04.643317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.337 [2024-07-15 11:45:04.643327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.337 [2024-07-15 11:45:04.643594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.337 [2024-07-15 11:45:04.643860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.337 [2024-07-15 11:45:04.643871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.337 [2024-07-15 11:45:04.643880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.337 [2024-07-15 11:45:04.648127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.337 [2024-07-15 11:45:04.657402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.337 [2024-07-15 11:45:04.657914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-07-15 11:45:04.657956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.337 [2024-07-15 11:45:04.657977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.337 [2024-07-15 11:45:04.658537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.337 [2024-07-15 11:45:04.658802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.337 [2024-07-15 11:45:04.658813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.337 [2024-07-15 11:45:04.658822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.337 [2024-07-15 11:45:04.663062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.337 [2024-07-15 11:45:04.672086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.337 [2024-07-15 11:45:04.672636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-07-15 11:45:04.672658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.337 [2024-07-15 11:45:04.672669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.337 [2024-07-15 11:45:04.672932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.337 [2024-07-15 11:45:04.673197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.337 [2024-07-15 11:45:04.673208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.337 [2024-07-15 11:45:04.673217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.337 [2024-07-15 11:45:04.677473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.337 [2024-07-15 11:45:04.686739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.337 [2024-07-15 11:45:04.687269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-07-15 11:45:04.687290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.337 [2024-07-15 11:45:04.687300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.337 [2024-07-15 11:45:04.687564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.337 [2024-07-15 11:45:04.687828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.337 [2024-07-15 11:45:04.687839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.337 [2024-07-15 11:45:04.687853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.337 [2024-07-15 11:45:04.692106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.337 [2024-07-15 11:45:04.701374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.337 [2024-07-15 11:45:04.701905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-07-15 11:45:04.701960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.337 [2024-07-15 11:45:04.701981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.337 [2024-07-15 11:45:04.702573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.337 [2024-07-15 11:45:04.702868] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.337 [2024-07-15 11:45:04.702880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.337 [2024-07-15 11:45:04.702889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.337 [2024-07-15 11:45:04.707176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.337 [2024-07-15 11:45:04.715937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.337 [2024-07-15 11:45:04.716459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-07-15 11:45:04.716481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.337 [2024-07-15 11:45:04.716491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.337 [2024-07-15 11:45:04.716754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.337 [2024-07-15 11:45:04.717019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.337 [2024-07-15 11:45:04.717030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.337 [2024-07-15 11:45:04.717039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.337 [2024-07-15 11:45:04.721287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.337 [2024-07-15 11:45:04.730554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.337 [2024-07-15 11:45:04.731102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-07-15 11:45:04.731144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.337 [2024-07-15 11:45:04.731166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.337 [2024-07-15 11:45:04.731767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.337 [2024-07-15 11:45:04.732032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.337 [2024-07-15 11:45:04.732044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.337 [2024-07-15 11:45:04.732053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.337 [2024-07-15 11:45:04.736304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.337 [2024-07-15 11:45:04.745336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.337 [2024-07-15 11:45:04.745869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-07-15 11:45:04.745894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.337 [2024-07-15 11:45:04.745904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.337 [2024-07-15 11:45:04.746168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.337 [2024-07-15 11:45:04.746438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.337 [2024-07-15 11:45:04.746449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.337 [2024-07-15 11:45:04.746459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.337 [2024-07-15 11:45:04.750698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.337 [2024-07-15 11:45:04.759968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.337 [2024-07-15 11:45:04.760515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-07-15 11:45:04.760558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.337 [2024-07-15 11:45:04.760579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.337 [2024-07-15 11:45:04.761035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.337 [2024-07-15 11:45:04.761304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.337 [2024-07-15 11:45:04.761316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.337 [2024-07-15 11:45:04.761325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.337 [2024-07-15 11:45:04.765566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.337 [2024-07-15 11:45:04.774590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.337 [2024-07-15 11:45:04.775115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-07-15 11:45:04.775136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.337 [2024-07-15 11:45:04.775147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.337 [2024-07-15 11:45:04.775417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.337 [2024-07-15 11:45:04.775682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.337 [2024-07-15 11:45:04.775694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.337 [2024-07-15 11:45:04.775703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.337 [2024-07-15 11:45:04.779953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.337 [2024-07-15 11:45:04.789223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.337 [2024-07-15 11:45:04.789756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-07-15 11:45:04.789798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.337 [2024-07-15 11:45:04.789820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.337 [2024-07-15 11:45:04.790410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.337 [2024-07-15 11:45:04.790999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.337 [2024-07-15 11:45:04.791022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.337 [2024-07-15 11:45:04.791041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.337 [2024-07-15 11:45:04.795335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.597 [2024-07-15 11:45:04.803884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.598 [2024-07-15 11:45:04.804348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.598 [2024-07-15 11:45:04.804371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.598 [2024-07-15 11:45:04.804381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.598 [2024-07-15 11:45:04.804645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.598 [2024-07-15 11:45:04.804909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.598 [2024-07-15 11:45:04.804921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.598 [2024-07-15 11:45:04.804930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.598 [2024-07-15 11:45:04.809175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.598 [2024-07-15 11:45:04.818450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.598 [2024-07-15 11:45:04.818983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.598 [2024-07-15 11:45:04.819024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.598 [2024-07-15 11:45:04.819046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.598 [2024-07-15 11:45:04.819637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.598 [2024-07-15 11:45:04.820060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.598 [2024-07-15 11:45:04.820071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.598 [2024-07-15 11:45:04.820081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.598 [2024-07-15 11:45:04.824320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.598 [2024-07-15 11:45:04.833081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.598 [2024-07-15 11:45:04.833650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.598 [2024-07-15 11:45:04.833693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.598 [2024-07-15 11:45:04.833714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.598 [2024-07-15 11:45:04.834171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.598 [2024-07-15 11:45:04.834444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.598 [2024-07-15 11:45:04.834455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.598 [2024-07-15 11:45:04.834464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.598 [2024-07-15 11:45:04.838711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.598 [2024-07-15 11:45:04.847731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.598 [2024-07-15 11:45:04.848273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.598 [2024-07-15 11:45:04.848315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.598 [2024-07-15 11:45:04.848337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.598 [2024-07-15 11:45:04.848911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.598 [2024-07-15 11:45:04.849175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.598 [2024-07-15 11:45:04.849187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.598 [2024-07-15 11:45:04.849196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.598 [2024-07-15 11:45:04.853449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.598 [2024-07-15 11:45:04.862472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.598 [2024-07-15 11:45:04.862998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.598 [2024-07-15 11:45:04.863019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.598 [2024-07-15 11:45:04.863029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.598 [2024-07-15 11:45:04.863301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.598 [2024-07-15 11:45:04.863566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.598 [2024-07-15 11:45:04.863577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.598 [2024-07-15 11:45:04.863587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.598 [2024-07-15 11:45:04.867831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.598 [2024-07-15 11:45:04.877092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.598 [2024-07-15 11:45:04.877544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.598 [2024-07-15 11:45:04.877566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.598 [2024-07-15 11:45:04.877576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.598 [2024-07-15 11:45:04.877840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.598 [2024-07-15 11:45:04.878104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.598 [2024-07-15 11:45:04.878115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.598 [2024-07-15 11:45:04.878124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.598 [2024-07-15 11:45:04.882373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.598 [2024-07-15 11:45:04.891642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.598 [2024-07-15 11:45:04.892168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.598 [2024-07-15 11:45:04.892189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.598 [2024-07-15 11:45:04.892207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.598 [2024-07-15 11:45:04.892477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.598 [2024-07-15 11:45:04.892742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.598 [2024-07-15 11:45:04.892753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.598 [2024-07-15 11:45:04.892762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.598 [2024-07-15 11:45:04.897008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.598 [2024-07-15 11:45:04.906271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.598 [2024-07-15 11:45:04.906799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.598 [2024-07-15 11:45:04.906820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.598 [2024-07-15 11:45:04.906830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.598 [2024-07-15 11:45:04.907094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.598 [2024-07-15 11:45:04.907365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.598 [2024-07-15 11:45:04.907377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.598 [2024-07-15 11:45:04.907386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.598 [2024-07-15 11:45:04.911626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.598 [2024-07-15 11:45:04.920894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.598 [2024-07-15 11:45:04.921421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.598 [2024-07-15 11:45:04.921470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.598 [2024-07-15 11:45:04.921491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.598 [2024-07-15 11:45:04.922068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.598 [2024-07-15 11:45:04.922440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.598 [2024-07-15 11:45:04.922453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.598 [2024-07-15 11:45:04.922462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.598 [2024-07-15 11:45:04.926706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.598 [2024-07-15 11:45:04.935471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.598 [2024-07-15 11:45:04.936002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.598 [2024-07-15 11:45:04.936043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.598 [2024-07-15 11:45:04.936064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.598 [2024-07-15 11:45:04.936542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.598 [2024-07-15 11:45:04.936813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.598 [2024-07-15 11:45:04.936828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.598 [2024-07-15 11:45:04.936838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.598 [2024-07-15 11:45:04.941074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2964018 Killed "${NVMF_APP[@]}" "$@" 00:29:30.598 11:45:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:30.598 11:45:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:30.598 11:45:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:30.598 11:45:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:30.598 11:45:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:30.598 [2024-07-15 11:45:04.950090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.598 [2024-07-15 11:45:04.950639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.598 [2024-07-15 11:45:04.950660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.598 [2024-07-15 11:45:04.950670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.599 [2024-07-15 11:45:04.950933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.599 11:45:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2965654 00:29:30.599 [2024-07-15 11:45:04.951198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.599 [2024-07-15 11:45:04.951209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.599 [2024-07-15 11:45:04.951218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.599 11:45:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2965654 00:29:30.599 11:45:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:30.599 11:45:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2965654 ']' 00:29:30.599 11:45:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.599 11:45:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:30.599 11:45:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.599 11:45:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:30.599 11:45:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:30.599 [2024-07-15 11:45:04.955471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
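The xtrace output interleaved above is the bdevperf test script restarting the NVMe-oF target: the previous nvmf_tgt instance (pid 2964018, the "${NVMF_APP[@]}" job reported as Killed by bdevperf.sh line 35) is gone, so tgt_init runs nvmfappstart -m 0xE, which launches a fresh nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and then waits (waitforlisten, rpc_addr=/var/tmp/spdk.sock, max_retries=100) for the new process to answer on its RPC socket. Stripped of the test-framework plumbing, the sequence amounts to roughly the following sketch; the paths and flags are taken from the trace above, while the polling loop is only an illustrative stand-in for what waitforlisten does, not its exact code:

  # start a fresh target in the test namespace and remember its pid (nvmfpid in the trace)
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!

  # wait until the app listens on its RPC socket, as waitforlisten does (max_retries=100 in the trace)
  for _ in $(seq 1 100); do
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done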
00:29:30.599 [2024-07-15 11:45:04.964750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.599 [2024-07-15 11:45:04.965301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.599 [2024-07-15 11:45:04.965323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.599 [2024-07-15 11:45:04.965333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.599 [2024-07-15 11:45:04.965597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.599 [2024-07-15 11:45:04.965861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.599 [2024-07-15 11:45:04.965877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.599 [2024-07-15 11:45:04.965886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.599 [2024-07-15 11:45:04.970136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.599 [2024-07-15 11:45:04.979413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.599 [2024-07-15 11:45:04.979965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.599 [2024-07-15 11:45:04.979986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.599 [2024-07-15 11:45:04.979996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.599 [2024-07-15 11:45:04.980268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.599 [2024-07-15 11:45:04.980534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.599 [2024-07-15 11:45:04.980545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.599 [2024-07-15 11:45:04.980554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.599 [2024-07-15 11:45:04.984803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.599 [2024-07-15 11:45:04.994090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.599 [2024-07-15 11:45:04.994624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.599 [2024-07-15 11:45:04.994646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.599 [2024-07-15 11:45:04.994656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.599 [2024-07-15 11:45:04.994920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.599 [2024-07-15 11:45:04.995185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.599 [2024-07-15 11:45:04.995196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.599 [2024-07-15 11:45:04.995205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.599 [2024-07-15 11:45:04.999457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.599 [2024-07-15 11:45:05.002287] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:29:30.599 [2024-07-15 11:45:05.002347] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:30.599 [2024-07-15 11:45:05.008732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.599 [2024-07-15 11:45:05.009201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.599 [2024-07-15 11:45:05.009222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.599 [2024-07-15 11:45:05.009232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.599 [2024-07-15 11:45:05.009525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.599 [2024-07-15 11:45:05.009793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.599 [2024-07-15 11:45:05.009805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.599 [2024-07-15 11:45:05.009819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.599 [2024-07-15 11:45:05.014075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.599 [2024-07-15 11:45:05.023475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.599 [2024-07-15 11:45:05.024028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.599 [2024-07-15 11:45:05.024050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.599 [2024-07-15 11:45:05.024061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.599 [2024-07-15 11:45:05.024368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.599 [2024-07-15 11:45:05.024635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.599 [2024-07-15 11:45:05.024647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.599 [2024-07-15 11:45:05.024656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.599 [2024-07-15 11:45:05.028909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.599 [2024-07-15 11:45:05.038190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.599 [2024-07-15 11:45:05.038744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.599 [2024-07-15 11:45:05.038766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.599 [2024-07-15 11:45:05.038776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.599 [2024-07-15 11:45:05.039041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.599 [2024-07-15 11:45:05.039313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.599 [2024-07-15 11:45:05.039324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.599 [2024-07-15 11:45:05.039334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.599 EAL: No free 2048 kB hugepages reported on node 1 00:29:30.599 [2024-07-15 11:45:05.043577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.599 [2024-07-15 11:45:05.052858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.599 [2024-07-15 11:45:05.053413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.599 [2024-07-15 11:45:05.053435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.599 [2024-07-15 11:45:05.053445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.599 [2024-07-15 11:45:05.053710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.599 [2024-07-15 11:45:05.053975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.599 [2024-07-15 11:45:05.053986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.599 [2024-07-15 11:45:05.053995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.599 [2024-07-15 11:45:05.058250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.860 [2024-07-15 11:45:05.067539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.860 [2024-07-15 11:45:05.068070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-07-15 11:45:05.068090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.860 [2024-07-15 11:45:05.068100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.860 [2024-07-15 11:45:05.068370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.860 [2024-07-15 11:45:05.068635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.860 [2024-07-15 11:45:05.068646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.860 [2024-07-15 11:45:05.068656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.860 [2024-07-15 11:45:05.072906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.860 [2024-07-15 11:45:05.082180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.860 [2024-07-15 11:45:05.082739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-07-15 11:45:05.082761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.860 [2024-07-15 11:45:05.082771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.860 [2024-07-15 11:45:05.083034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.860 [2024-07-15 11:45:05.083304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.860 [2024-07-15 11:45:05.083316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.860 [2024-07-15 11:45:05.083325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.860 [2024-07-15 11:45:05.087570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.860 [2024-07-15 11:45:05.091365] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:30.860 [2024-07-15 11:45:05.096869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.860 [2024-07-15 11:45:05.097425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-07-15 11:45:05.097448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.860 [2024-07-15 11:45:05.097458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.860 [2024-07-15 11:45:05.097722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.860 [2024-07-15 11:45:05.097987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.860 [2024-07-15 11:45:05.097999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.860 [2024-07-15 11:45:05.098008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.860 [2024-07-15 11:45:05.102268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.860 [2024-07-15 11:45:05.111548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.860 [2024-07-15 11:45:05.112057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-07-15 11:45:05.112079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.860 [2024-07-15 11:45:05.112090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.860 [2024-07-15 11:45:05.112362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.860 [2024-07-15 11:45:05.112629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.860 [2024-07-15 11:45:05.112640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.860 [2024-07-15 11:45:05.112650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.860 [2024-07-15 11:45:05.116893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.860 [2024-07-15 11:45:05.126159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.860 [2024-07-15 11:45:05.126745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-07-15 11:45:05.126767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.860 [2024-07-15 11:45:05.126777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.860 [2024-07-15 11:45:05.127042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.860 [2024-07-15 11:45:05.127313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.860 [2024-07-15 11:45:05.127326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.860 [2024-07-15 11:45:05.127335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.860 [2024-07-15 11:45:05.131581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.860 [2024-07-15 11:45:05.140860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.860 [2024-07-15 11:45:05.141393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-07-15 11:45:05.141414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.860 [2024-07-15 11:45:05.141425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.860 [2024-07-15 11:45:05.141688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.860 [2024-07-15 11:45:05.141954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.860 [2024-07-15 11:45:05.141965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.860 [2024-07-15 11:45:05.141974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.860 [2024-07-15 11:45:05.146229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.860 [2024-07-15 11:45:05.155517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.860 [2024-07-15 11:45:05.156011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-07-15 11:45:05.156034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.860 [2024-07-15 11:45:05.156044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.860 [2024-07-15 11:45:05.156314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.860 [2024-07-15 11:45:05.156580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.860 [2024-07-15 11:45:05.156592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.860 [2024-07-15 11:45:05.156607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.860 [2024-07-15 11:45:05.160849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.860 [2024-07-15 11:45:05.170145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.860 [2024-07-15 11:45:05.170616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-07-15 11:45:05.170638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.860 [2024-07-15 11:45:05.170648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.860 [2024-07-15 11:45:05.170912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.860 [2024-07-15 11:45:05.171178] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.860 [2024-07-15 11:45:05.171190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.860 [2024-07-15 11:45:05.171199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.860 [2024-07-15 11:45:05.175460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.860 [2024-07-15 11:45:05.184735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.860 [2024-07-15 11:45:05.185292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-07-15 11:45:05.185314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.860 [2024-07-15 11:45:05.185324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.860 [2024-07-15 11:45:05.185589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.860 [2024-07-15 11:45:05.185854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.860 [2024-07-15 11:45:05.185866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.860 [2024-07-15 11:45:05.185875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.860 [2024-07-15 11:45:05.190128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.860 [2024-07-15 11:45:05.198028] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:30.860 [2024-07-15 11:45:05.198066] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:30.860 [2024-07-15 11:45:05.198079] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:30.860 [2024-07-15 11:45:05.198090] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:30.860 [2024-07-15 11:45:05.198099] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
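The app_setup_trace notices above show that the restarted target came up with tracepoints enabled (Tracepoint Group Mask 0xFFFF) and is writing its trace buffer to /dev/shm/nvmf_trace.0. Following the instructions printed in the log, the trace can be inspected live or kept for offline analysis; a minimal sketch, assuming the spdk_trace tool from the same SPDK build is on hand (its exact path depends on the checkout):

  # live snapshot of the nvmf app's events, as the notice suggests
  ./build/bin/spdk_trace -s nvmf -i 0

  # or preserve the raw shared-memory buffer for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0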
00:29:30.860 [2024-07-15 11:45:05.198162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:30.860 [2024-07-15 11:45:05.198295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:30.860 [2024-07-15 11:45:05.198297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.860 [2024-07-15 11:45:05.199414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.860 [2024-07-15 11:45:05.199968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-07-15 11:45:05.199989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.861 [2024-07-15 11:45:05.200000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.861 [2024-07-15 11:45:05.200275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.861 [2024-07-15 11:45:05.200541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.861 [2024-07-15 11:45:05.200552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.861 [2024-07-15 11:45:05.200562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.861 [2024-07-15 11:45:05.204810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.861 [2024-07-15 11:45:05.214092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.861 [2024-07-15 11:45:05.214581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-07-15 11:45:05.214604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.861 [2024-07-15 11:45:05.214615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.861 [2024-07-15 11:45:05.214879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.861 [2024-07-15 11:45:05.215144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.861 [2024-07-15 11:45:05.215155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.861 [2024-07-15 11:45:05.215165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.861 [2024-07-15 11:45:05.219459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.861 [2024-07-15 11:45:05.228739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.861 [2024-07-15 11:45:05.229310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-07-15 11:45:05.229334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.861 [2024-07-15 11:45:05.229344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.861 [2024-07-15 11:45:05.229609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.861 [2024-07-15 11:45:05.229873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.861 [2024-07-15 11:45:05.229884] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.861 [2024-07-15 11:45:05.229894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.861 [2024-07-15 11:45:05.234146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.861 [2024-07-15 11:45:05.243438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.861 [2024-07-15 11:45:05.243975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-07-15 11:45:05.243997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.861 [2024-07-15 11:45:05.244008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.861 [2024-07-15 11:45:05.244278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.861 [2024-07-15 11:45:05.244543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.861 [2024-07-15 11:45:05.244555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.861 [2024-07-15 11:45:05.244571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.861 [2024-07-15 11:45:05.248822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.861 [2024-07-15 11:45:05.258106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.861 [2024-07-15 11:45:05.258667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-07-15 11:45:05.258690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.861 [2024-07-15 11:45:05.258701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.861 [2024-07-15 11:45:05.258965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.861 [2024-07-15 11:45:05.259231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.861 [2024-07-15 11:45:05.259242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.861 [2024-07-15 11:45:05.259252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.861 [2024-07-15 11:45:05.263503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.861 [2024-07-15 11:45:05.272776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.861 [2024-07-15 11:45:05.273281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-07-15 11:45:05.273303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.861 [2024-07-15 11:45:05.273313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.861 [2024-07-15 11:45:05.273578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.861 [2024-07-15 11:45:05.273842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.861 [2024-07-15 11:45:05.273854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.861 [2024-07-15 11:45:05.273863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.861 [2024-07-15 11:45:05.278108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.861 [2024-07-15 11:45:05.287380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.861 [2024-07-15 11:45:05.287909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-07-15 11:45:05.287931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.861 [2024-07-15 11:45:05.287941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.861 [2024-07-15 11:45:05.288204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.861 [2024-07-15 11:45:05.288475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.861 [2024-07-15 11:45:05.288487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.861 [2024-07-15 11:45:05.288496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.861 [2024-07-15 11:45:05.292745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.861 [2024-07-15 11:45:05.302018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.861 [2024-07-15 11:45:05.302591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-07-15 11:45:05.302611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.861 [2024-07-15 11:45:05.302622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.861 [2024-07-15 11:45:05.302886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.861 [2024-07-15 11:45:05.303149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.861 [2024-07-15 11:45:05.303160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.861 [2024-07-15 11:45:05.303169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.861 [2024-07-15 11:45:05.307420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.861 [2024-07-15 11:45:05.316698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.861 [2024-07-15 11:45:05.317261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-07-15 11:45:05.317282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:30.861 [2024-07-15 11:45:05.317292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:30.861 [2024-07-15 11:45:05.317555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:30.861 [2024-07-15 11:45:05.317820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.861 [2024-07-15 11:45:05.317831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.861 [2024-07-15 11:45:05.317840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.861 [2024-07-15 11:45:05.322093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.123 [2024-07-15 11:45:05.331375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.123 [2024-07-15 11:45:05.331928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.123 [2024-07-15 11:45:05.331948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.123 [2024-07-15 11:45:05.331958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.123 [2024-07-15 11:45:05.332223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.123 [2024-07-15 11:45:05.332495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.123 [2024-07-15 11:45:05.332508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.123 [2024-07-15 11:45:05.332517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.123 [2024-07-15 11:45:05.336767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.123 [2024-07-15 11:45:05.346033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.123 [2024-07-15 11:45:05.346595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.123 [2024-07-15 11:45:05.346616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.123 [2024-07-15 11:45:05.346626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.123 [2024-07-15 11:45:05.346893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.123 [2024-07-15 11:45:05.347158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.123 [2024-07-15 11:45:05.347169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.123 [2024-07-15 11:45:05.347178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.123 [2024-07-15 11:45:05.351433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.123 [2024-07-15 11:45:05.360707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.123 [2024-07-15 11:45:05.361235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.123 [2024-07-15 11:45:05.361261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.123 [2024-07-15 11:45:05.361271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.123 [2024-07-15 11:45:05.361535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.123 [2024-07-15 11:45:05.361799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.123 [2024-07-15 11:45:05.361810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.123 [2024-07-15 11:45:05.361819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.123 [2024-07-15 11:45:05.366068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.123 [2024-07-15 11:45:05.375342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.123 [2024-07-15 11:45:05.375898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.123 [2024-07-15 11:45:05.375919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.123 [2024-07-15 11:45:05.375929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.123 [2024-07-15 11:45:05.376193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.123 [2024-07-15 11:45:05.376465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.123 [2024-07-15 11:45:05.376477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.123 [2024-07-15 11:45:05.376487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.123 [2024-07-15 11:45:05.380728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.123 [2024-07-15 11:45:05.390002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.123 [2024-07-15 11:45:05.390458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.123 [2024-07-15 11:45:05.390479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.123 [2024-07-15 11:45:05.390489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.123 [2024-07-15 11:45:05.390752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.123 [2024-07-15 11:45:05.391017] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.123 [2024-07-15 11:45:05.391028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.123 [2024-07-15 11:45:05.391041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.123 [2024-07-15 11:45:05.395295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.123 [2024-07-15 11:45:05.404571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.123 [2024-07-15 11:45:05.405125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.123 [2024-07-15 11:45:05.405146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.123 [2024-07-15 11:45:05.405156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.123 [2024-07-15 11:45:05.405426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.123 [2024-07-15 11:45:05.405691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.123 [2024-07-15 11:45:05.405702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.123 [2024-07-15 11:45:05.405712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.123 [2024-07-15 11:45:05.409961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.123 [2024-07-15 11:45:05.419231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.123 [2024-07-15 11:45:05.419790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.123 [2024-07-15 11:45:05.419810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.123 [2024-07-15 11:45:05.419821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.123 [2024-07-15 11:45:05.420085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.123 [2024-07-15 11:45:05.420354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.123 [2024-07-15 11:45:05.420366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.123 [2024-07-15 11:45:05.420375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.123 [2024-07-15 11:45:05.424613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.123 [2024-07-15 11:45:05.433919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.123 [2024-07-15 11:45:05.434475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.123 [2024-07-15 11:45:05.434497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.123 [2024-07-15 11:45:05.434507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.123 [2024-07-15 11:45:05.434773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.123 [2024-07-15 11:45:05.435038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.123 [2024-07-15 11:45:05.435049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.123 [2024-07-15 11:45:05.435058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.123 [2024-07-15 11:45:05.439308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.123 [2024-07-15 11:45:05.448586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.123 [2024-07-15 11:45:05.449147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.123 [2024-07-15 11:45:05.449172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.123 [2024-07-15 11:45:05.449182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.123 [2024-07-15 11:45:05.449453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.123 [2024-07-15 11:45:05.449717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.123 [2024-07-15 11:45:05.449728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.123 [2024-07-15 11:45:05.449737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.123 [2024-07-15 11:45:05.453982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.124 [2024-07-15 11:45:05.463261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.124 [2024-07-15 11:45:05.463708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.124 [2024-07-15 11:45:05.463728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.124 [2024-07-15 11:45:05.463738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.124 [2024-07-15 11:45:05.464001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.124 [2024-07-15 11:45:05.464269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.124 [2024-07-15 11:45:05.464281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.124 [2024-07-15 11:45:05.464290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.124 [2024-07-15 11:45:05.468532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.124 [2024-07-15 11:45:05.478052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.124 [2024-07-15 11:45:05.478622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.124 [2024-07-15 11:45:05.478644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.124 [2024-07-15 11:45:05.478654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.124 [2024-07-15 11:45:05.478918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.124 [2024-07-15 11:45:05.479183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.124 [2024-07-15 11:45:05.479194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.124 [2024-07-15 11:45:05.479203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.124 [2024-07-15 11:45:05.483454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.124 [2024-07-15 11:45:05.492728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.124 [2024-07-15 11:45:05.493288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.124 [2024-07-15 11:45:05.493310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.124 [2024-07-15 11:45:05.493320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.124 [2024-07-15 11:45:05.493585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.124 [2024-07-15 11:45:05.493854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.124 [2024-07-15 11:45:05.493865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.124 [2024-07-15 11:45:05.493874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.124 [2024-07-15 11:45:05.498116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.124 [2024-07-15 11:45:05.507398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.124 [2024-07-15 11:45:05.507944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.124 [2024-07-15 11:45:05.507966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.124 [2024-07-15 11:45:05.507976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.124 [2024-07-15 11:45:05.508240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.124 [2024-07-15 11:45:05.508512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.124 [2024-07-15 11:45:05.508524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.124 [2024-07-15 11:45:05.508533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.124 [2024-07-15 11:45:05.512783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.124 [2024-07-15 11:45:05.522056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.124 [2024-07-15 11:45:05.522613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.124 [2024-07-15 11:45:05.522635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.124 [2024-07-15 11:45:05.522645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.124 [2024-07-15 11:45:05.522909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.124 [2024-07-15 11:45:05.523173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.124 [2024-07-15 11:45:05.523184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.124 [2024-07-15 11:45:05.523194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.124 [2024-07-15 11:45:05.527452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.124 [2024-07-15 11:45:05.536730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.124 [2024-07-15 11:45:05.537259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.124 [2024-07-15 11:45:05.537281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.124 [2024-07-15 11:45:05.537291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.124 [2024-07-15 11:45:05.537554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.124 [2024-07-15 11:45:05.537819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.124 [2024-07-15 11:45:05.537830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.124 [2024-07-15 11:45:05.537840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.124 [2024-07-15 11:45:05.542092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.124 [2024-07-15 11:45:05.551388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.124 [2024-07-15 11:45:05.551793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.124 [2024-07-15 11:45:05.551814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.124 [2024-07-15 11:45:05.551825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.124 [2024-07-15 11:45:05.552089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.124 [2024-07-15 11:45:05.552360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.124 [2024-07-15 11:45:05.552372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.124 [2024-07-15 11:45:05.552382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.124 [2024-07-15 11:45:05.556641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.124 [2024-07-15 11:45:05.566177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.124 [2024-07-15 11:45:05.566661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.124 [2024-07-15 11:45:05.566682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.124 [2024-07-15 11:45:05.566692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.124 [2024-07-15 11:45:05.566957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.124 [2024-07-15 11:45:05.567223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.124 [2024-07-15 11:45:05.567235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.124 [2024-07-15 11:45:05.567244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.124 [2024-07-15 11:45:05.571500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.124 [2024-07-15 11:45:05.580775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.124 [2024-07-15 11:45:05.581240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.124 [2024-07-15 11:45:05.581268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.124 [2024-07-15 11:45:05.581279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.124 [2024-07-15 11:45:05.581543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.124 [2024-07-15 11:45:05.581809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.124 [2024-07-15 11:45:05.581820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.124 [2024-07-15 11:45:05.581830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.384 [2024-07-15 11:45:05.586088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.384 [2024-07-15 11:45:05.595389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.384 [2024-07-15 11:45:05.595836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.385 [2024-07-15 11:45:05.595857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.385 [2024-07-15 11:45:05.595875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.385 [2024-07-15 11:45:05.596138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.385 [2024-07-15 11:45:05.596410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.385 [2024-07-15 11:45:05.596422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.385 [2024-07-15 11:45:05.596432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.385 [2024-07-15 11:45:05.600690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.385 [2024-07-15 11:45:05.609977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.385 [2024-07-15 11:45:05.610514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.385 [2024-07-15 11:45:05.610536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.385 [2024-07-15 11:45:05.610546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.385 [2024-07-15 11:45:05.610810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.385 [2024-07-15 11:45:05.611076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.385 [2024-07-15 11:45:05.611087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.385 [2024-07-15 11:45:05.611097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.385 [2024-07-15 11:45:05.615352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.385 [2024-07-15 11:45:05.624629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.385 [2024-07-15 11:45:05.625177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.385 [2024-07-15 11:45:05.625198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.385 [2024-07-15 11:45:05.625208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.385 [2024-07-15 11:45:05.625480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.385 [2024-07-15 11:45:05.625746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.385 [2024-07-15 11:45:05.625757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.385 [2024-07-15 11:45:05.625767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.385 [2024-07-15 11:45:05.630013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.385 [2024-07-15 11:45:05.639322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.385 [2024-07-15 11:45:05.639880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.385 [2024-07-15 11:45:05.639903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.385 [2024-07-15 11:45:05.639913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.385 [2024-07-15 11:45:05.640178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.385 [2024-07-15 11:45:05.640449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.385 [2024-07-15 11:45:05.640465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.385 [2024-07-15 11:45:05.640475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.385 [2024-07-15 11:45:05.644722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.385 [2024-07-15 11:45:05.654016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.385 [2024-07-15 11:45:05.654548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.385 [2024-07-15 11:45:05.654569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.385 [2024-07-15 11:45:05.654580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.385 [2024-07-15 11:45:05.654844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.385 [2024-07-15 11:45:05.655108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.385 [2024-07-15 11:45:05.655121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.385 [2024-07-15 11:45:05.655130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.385 [2024-07-15 11:45:05.659386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.385 [2024-07-15 11:45:05.668682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.385 [2024-07-15 11:45:05.669084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.385 [2024-07-15 11:45:05.669105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.385 [2024-07-15 11:45:05.669115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.385 [2024-07-15 11:45:05.669385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.385 [2024-07-15 11:45:05.669651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.385 [2024-07-15 11:45:05.669662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.385 [2024-07-15 11:45:05.669671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.385 [2024-07-15 11:45:05.673914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.385 [2024-07-15 11:45:05.683445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.385 [2024-07-15 11:45:05.683973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.385 [2024-07-15 11:45:05.683994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.385 [2024-07-15 11:45:05.684004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.385 [2024-07-15 11:45:05.684272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.385 [2024-07-15 11:45:05.684537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.385 [2024-07-15 11:45:05.684548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.385 [2024-07-15 11:45:05.684557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.385 [2024-07-15 11:45:05.688802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.385 [2024-07-15 11:45:05.698095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.385 [2024-07-15 11:45:05.698480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.385 [2024-07-15 11:45:05.698502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.385 [2024-07-15 11:45:05.698513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.385 [2024-07-15 11:45:05.698777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.385 [2024-07-15 11:45:05.699044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.385 [2024-07-15 11:45:05.699055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.385 [2024-07-15 11:45:05.699065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.385 [2024-07-15 11:45:05.703318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.385 [2024-07-15 11:45:05.712841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.385 [2024-07-15 11:45:05.713318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.385 [2024-07-15 11:45:05.713339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.385 [2024-07-15 11:45:05.713349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.385 [2024-07-15 11:45:05.713613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.385 [2024-07-15 11:45:05.713877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.385 [2024-07-15 11:45:05.713889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.385 [2024-07-15 11:45:05.713898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.385 [2024-07-15 11:45:05.718149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.385 [2024-07-15 11:45:05.727441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.385 [2024-07-15 11:45:05.727965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.385 [2024-07-15 11:45:05.727986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.385 [2024-07-15 11:45:05.727996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.385 [2024-07-15 11:45:05.728445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.385 [2024-07-15 11:45:05.728712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.385 [2024-07-15 11:45:05.728723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.385 [2024-07-15 11:45:05.728733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.385 [2024-07-15 11:45:05.732978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.385 [2024-07-15 11:45:05.742013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.385 [2024-07-15 11:45:05.742518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.385 [2024-07-15 11:45:05.742540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.385 [2024-07-15 11:45:05.742550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.385 [2024-07-15 11:45:05.742818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.385 [2024-07-15 11:45:05.743084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.385 [2024-07-15 11:45:05.743095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.385 [2024-07-15 11:45:05.743104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.385 [2024-07-15 11:45:05.747367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.386 [2024-07-15 11:45:05.756653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.386 [2024-07-15 11:45:05.757125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.386 [2024-07-15 11:45:05.757147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.386 [2024-07-15 11:45:05.757157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.386 [2024-07-15 11:45:05.757426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.386 [2024-07-15 11:45:05.757695] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.386 [2024-07-15 11:45:05.757706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.386 [2024-07-15 11:45:05.757716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.386 [2024-07-15 11:45:05.761969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.386 [2024-07-15 11:45:05.771258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.386 [2024-07-15 11:45:05.771732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.386 [2024-07-15 11:45:05.771753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.386 [2024-07-15 11:45:05.771763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.386 [2024-07-15 11:45:05.772028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.386 [2024-07-15 11:45:05.772299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.386 [2024-07-15 11:45:05.772311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.386 [2024-07-15 11:45:05.772320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.386 [2024-07-15 11:45:05.776565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.386 [2024-07-15 11:45:05.785852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.386 [2024-07-15 11:45:05.786352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.386 [2024-07-15 11:45:05.786375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.386 [2024-07-15 11:45:05.786385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.386 [2024-07-15 11:45:05.786649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.386 [2024-07-15 11:45:05.786914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.386 [2024-07-15 11:45:05.786925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.386 [2024-07-15 11:45:05.786938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.386 [2024-07-15 11:45:05.791199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.386 [2024-07-15 11:45:05.800494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.386 [2024-07-15 11:45:05.801002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.386 [2024-07-15 11:45:05.801023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.386 [2024-07-15 11:45:05.801032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.386 [2024-07-15 11:45:05.801304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.386 [2024-07-15 11:45:05.801568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.386 [2024-07-15 11:45:05.801580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.386 [2024-07-15 11:45:05.801589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.386 [2024-07-15 11:45:05.805837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.386 [2024-07-15 11:45:05.815113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.386 [2024-07-15 11:45:05.815590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.386 [2024-07-15 11:45:05.815611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.386 [2024-07-15 11:45:05.815621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.386 [2024-07-15 11:45:05.815884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.386 [2024-07-15 11:45:05.816149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.386 [2024-07-15 11:45:05.816160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.386 [2024-07-15 11:45:05.816169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.386 [2024-07-15 11:45:05.820422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.386 [2024-07-15 11:45:05.829699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.386 [2024-07-15 11:45:05.830150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.386 [2024-07-15 11:45:05.830171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.386 [2024-07-15 11:45:05.830180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.386 [2024-07-15 11:45:05.830451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.386 [2024-07-15 11:45:05.830716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.386 [2024-07-15 11:45:05.830728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.386 [2024-07-15 11:45:05.830737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.386 [2024-07-15 11:45:05.834986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.386 [2024-07-15 11:45:05.844295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.386 [2024-07-15 11:45:05.844685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.386 [2024-07-15 11:45:05.844706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.386 [2024-07-15 11:45:05.844716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.386 [2024-07-15 11:45:05.844979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.386 [2024-07-15 11:45:05.845243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.386 [2024-07-15 11:45:05.845262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.386 [2024-07-15 11:45:05.845272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.646 [2024-07-15 11:45:05.849524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.646 [2024-07-15 11:45:05.859060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.646 [2024-07-15 11:45:05.859544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.646 [2024-07-15 11:45:05.859565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.646 [2024-07-15 11:45:05.859575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.646 [2024-07-15 11:45:05.859839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.646 [2024-07-15 11:45:05.860103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.646 [2024-07-15 11:45:05.860115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.646 [2024-07-15 11:45:05.860124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.646 [2024-07-15 11:45:05.864379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.646 [2024-07-15 11:45:05.873657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.646 [2024-07-15 11:45:05.874092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.646 [2024-07-15 11:45:05.874112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.646 [2024-07-15 11:45:05.874122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.647 [2024-07-15 11:45:05.874392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.647 [2024-07-15 11:45:05.874657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.647 [2024-07-15 11:45:05.874668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.647 [2024-07-15 11:45:05.874678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.647 [2024-07-15 11:45:05.878926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.647 [2024-07-15 11:45:05.888209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.647 [2024-07-15 11:45:05.888710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.647 [2024-07-15 11:45:05.888731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.647 [2024-07-15 11:45:05.888742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.647 [2024-07-15 11:45:05.889006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.647 [2024-07-15 11:45:05.889283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.647 [2024-07-15 11:45:05.889296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.647 [2024-07-15 11:45:05.889305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.647 [2024-07-15 11:45:05.893564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.647 [2024-07-15 11:45:05.902845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.647 [2024-07-15 11:45:05.903324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.647 [2024-07-15 11:45:05.903346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.647 [2024-07-15 11:45:05.903357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.647 [2024-07-15 11:45:05.903622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.647 [2024-07-15 11:45:05.903888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.647 [2024-07-15 11:45:05.903900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.647 [2024-07-15 11:45:05.903909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.647 [2024-07-15 11:45:05.908164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.647 [2024-07-15 11:45:05.917444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.647 [2024-07-15 11:45:05.917968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.647 [2024-07-15 11:45:05.917989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.647 [2024-07-15 11:45:05.917999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.647 [2024-07-15 11:45:05.918271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.647 [2024-07-15 11:45:05.918536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.647 [2024-07-15 11:45:05.918548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.647 [2024-07-15 11:45:05.918558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.647 [2024-07-15 11:45:05.922801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.647 [2024-07-15 11:45:05.932075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.647 [2024-07-15 11:45:05.932567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.647 [2024-07-15 11:45:05.932588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.647 [2024-07-15 11:45:05.932598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.647 [2024-07-15 11:45:05.932862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.647 [2024-07-15 11:45:05.933127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.647 [2024-07-15 11:45:05.933139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.647 [2024-07-15 11:45:05.933148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.647 [2024-07-15 11:45:05.937409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.647 [2024-07-15 11:45:05.946688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.647 [2024-07-15 11:45:05.947200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.647 [2024-07-15 11:45:05.947221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.647 [2024-07-15 11:45:05.947231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.647 [2024-07-15 11:45:05.947502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.647 [2024-07-15 11:45:05.947767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.647 [2024-07-15 11:45:05.947778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.647 [2024-07-15 11:45:05.947788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.647 [2024-07-15 11:45:05.952037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.647 [2024-07-15 11:45:05.961318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.647 [2024-07-15 11:45:05.961842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.647 [2024-07-15 11:45:05.961863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.647 [2024-07-15 11:45:05.961873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.647 [2024-07-15 11:45:05.962136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.647 11:45:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:31.647 [2024-07-15 11:45:05.962408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.647 [2024-07-15 11:45:05.962420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.647 [2024-07-15 11:45:05.962430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.647 11:45:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:31.647 11:45:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:31.647 11:45:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:31.647 11:45:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:31.647 [2024-07-15 11:45:05.966683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.647 [2024-07-15 11:45:05.975971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.647 [2024-07-15 11:45:05.976463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.647 [2024-07-15 11:45:05.976485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.647 [2024-07-15 11:45:05.976495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.647 [2024-07-15 11:45:05.976759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.647 [2024-07-15 11:45:05.977025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.647 [2024-07-15 11:45:05.977036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.647 [2024-07-15 11:45:05.977046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.647 [2024-07-15 11:45:05.981308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.647 [2024-07-15 11:45:05.990602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.647 [2024-07-15 11:45:05.991057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.647 [2024-07-15 11:45:05.991078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.647 [2024-07-15 11:45:05.991087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.647 [2024-07-15 11:45:05.991357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.647 [2024-07-15 11:45:05.991623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.647 [2024-07-15 11:45:05.991635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.647 [2024-07-15 11:45:05.991644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.647 [2024-07-15 11:45:05.995891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.647 11:45:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:31.647 11:45:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:31.647 11:45:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.647 11:45:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:31.647 [2024-07-15 11:45:06.002886] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:31.647 [2024-07-15 11:45:06.005171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.647 [2024-07-15 11:45:06.005657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.647 [2024-07-15 11:45:06.005678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.647 [2024-07-15 11:45:06.005689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.647 [2024-07-15 11:45:06.005953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.647 [2024-07-15 11:45:06.006217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.647 [2024-07-15 11:45:06.006228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.647 [2024-07-15 11:45:06.006237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.647 [2024-07-15 11:45:06.010484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.647 11:45:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.647 11:45:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:31.647 11:45:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.647 11:45:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:31.647 [2024-07-15 11:45:06.019772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.647 [2024-07-15 11:45:06.020223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.648 [2024-07-15 11:45:06.020245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.648 [2024-07-15 11:45:06.020260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.648 [2024-07-15 11:45:06.020525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.648 [2024-07-15 11:45:06.020793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.648 [2024-07-15 11:45:06.020805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.648 [2024-07-15 11:45:06.020815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.648 [2024-07-15 11:45:06.025058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.648 [2024-07-15 11:45:06.034338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.648 [2024-07-15 11:45:06.034861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.648 [2024-07-15 11:45:06.034882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.648 [2024-07-15 11:45:06.034892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.648 [2024-07-15 11:45:06.035155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.648 [2024-07-15 11:45:06.035427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.648 [2024-07-15 11:45:06.035439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.648 [2024-07-15 11:45:06.035448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.648 [2024-07-15 11:45:06.039698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.648 [2024-07-15 11:45:06.048893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.648 [2024-07-15 11:45:06.049383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.648 [2024-07-15 11:45:06.049409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.648 [2024-07-15 11:45:06.049420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.648 [2024-07-15 11:45:06.049686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.648 [2024-07-15 11:45:06.049951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.648 [2024-07-15 11:45:06.049963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.648 [2024-07-15 11:45:06.049973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.648 [2024-07-15 11:45:06.054220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.648 Malloc0 00:29:31.648 11:45:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.648 11:45:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:31.648 11:45:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.648 11:45:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:31.648 [2024-07-15 11:45:06.063507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.648 [2024-07-15 11:45:06.064050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.648 [2024-07-15 11:45:06.064072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7e90 with addr=10.0.0.2, port=4420 00:29:31.648 [2024-07-15 11:45:06.064082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7e90 is same with the state(5) to be set 00:29:31.648 [2024-07-15 11:45:06.064352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7e90 (9): Bad file descriptor 00:29:31.648 [2024-07-15 11:45:06.064623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.648 [2024-07-15 11:45:06.064634] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.648 [2024-07-15 11:45:06.064644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.648 11:45:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.648 11:45:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:31.648 11:45:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.648 11:45:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:31.648 [2024-07-15 11:45:06.068888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.648 11:45:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.648 11:45:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:31.648 11:45:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.648 11:45:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:31.648 [2024-07-15 11:45:06.078161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.648 [2024-07-15 11:45:06.078404] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:31.648 11:45:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.648 11:45:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2964475 00:29:31.907 [2024-07-15 11:45:06.162045] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:40.026 00:29:40.026 Latency(us) 00:29:40.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:40.026 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:40.026 Verification LBA range: start 0x0 length 0x4000 00:29:40.026 Nvme1n1 : 15.03 3160.08 12.34 8572.92 0.00 10878.38 953.25 37891.72 00:29:40.026 =================================================================================================================== 00:29:40.026 Total : 3160.08 12.34 8572.92 0.00 10878.38 953.25 37891.72 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:40.285 rmmod nvme_tcp 00:29:40.285 rmmod nvme_fabrics 00:29:40.285 rmmod nvme_keyring 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2965654 ']' 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2965654 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2965654 ']' 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 2965654 00:29:40.285 11:45:14 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2965654 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2965654' 00:29:40.285 killing process with pid 2965654 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2965654 00:29:40.285 11:45:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2965654 00:29:40.544 11:45:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:40.544 11:45:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:40.544 11:45:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:40.544 11:45:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:40.544 11:45:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:40.544 11:45:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.544 11:45:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:40.544 11:45:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.082 11:45:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:43.082 00:29:43.082 real 0m26.155s 00:29:43.082 user 1m2.563s 00:29:43.082 sys 0m6.345s 00:29:43.082 11:45:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:43.082 11:45:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:43.082 ************************************ 00:29:43.082 END TEST nvmf_bdevperf 00:29:43.082 ************************************ 00:29:43.082 11:45:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:43.082 11:45:17 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:43.082 11:45:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:43.082 11:45:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:43.082 11:45:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:43.082 ************************************ 00:29:43.082 START TEST nvmf_target_disconnect 00:29:43.082 ************************************ 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:43.082 * Looking for test storage... 
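The nvmf_bdevperf run that finishes above drives the whole target bring-up through SPDK JSON-RPC: a TCP transport with an 8192-byte IO unit size, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and finally a TCP listener on 10.0.0.2:4420, at which point the queued controller resets succeed. Outside the test harness the same sequence can be issued by hand with scripts/rpc.py against a running nvmf_tgt; a sketch only, assuming the default /var/tmp/spdk.sock RPC socket:

    # Same RPCs as the rpc_cmd calls in the trace above (sketch, not part of the run)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420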
00:29:43.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:29:43.082 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:43.083 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:43.083 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.083 11:45:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:43.083 11:45:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.083 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:43.083 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:43.083 11:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:43.083 11:45:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:49.654 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:49.654 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:49.655 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.655 11:45:22 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:49.655 Found net devices under 0000:af:00.0: cvl_0_0 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:49.655 Found net devices under 0000:af:00.1: cvl_0_1 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:49.655 11:45:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:49.655 11:45:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:49.655 11:45:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:29:49.655 11:45:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:49.655 11:45:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:49.655 11:45:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:49.655 11:45:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:49.655 11:45:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:49.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:49.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:29:49.655 00:29:49.655 --- 10.0.0.2 ping statistics --- 00:29:49.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.655 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:29:49.655 11:45:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:49.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:49.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:29:49.655 00:29:49.655 --- 10.0.0.1 ping statistics --- 00:29:49.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.655 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:29:49.655 11:45:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:49.655 11:45:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:49.655 11:45:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:49.655 11:45:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:49.655 11:45:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:49.655 11:45:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:49.655 11:45:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:49.655 11:45:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:49.655 11:45:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:49.655 11:45:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:49.655 11:45:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:49.655 11:45:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:49.656 ************************************ 00:29:49.656 START TEST nvmf_target_disconnect_tc1 00:29:49.656 ************************************ 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:29:49.656 
11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:49.656 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.656 [2024-07-15 11:45:23.328387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.656 [2024-07-15 11:45:23.328441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130ecf0 with addr=10.0.0.2, port=4420 00:29:49.656 [2024-07-15 11:45:23.328470] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:49.656 [2024-07-15 11:45:23.328487] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:49.656 [2024-07-15 11:45:23.328495] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:49.656 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:49.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:49.656 Initializing NVMe Controllers 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:49.656 00:29:49.656 real 0m0.126s 00:29:49.656 user 0m0.048s 00:29:49.656 sys 
0m0.077s 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:49.656 ************************************ 00:29:49.656 END TEST nvmf_target_disconnect_tc1 00:29:49.656 ************************************ 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:49.656 ************************************ 00:29:49.656 START TEST nvmf_target_disconnect_tc2 00:29:49.656 ************************************ 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2971383 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2971383 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2971383 ']' 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:49.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
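The tc2 target above is launched with ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF0, i.e. inside the network namespace that nvmftestinit prepared earlier in this trace: cvl_0_0 carries 10.0.0.2/24 inside cvl_0_0_ns_spdk (target side), cvl_0_1 keeps 10.0.0.1/24 in the root namespace (initiator side), TCP port 4420 is admitted by iptables, and both directions are ping-verified. Consolidated from the ip/iptables commands already shown above, as a recap rather than an extra step the test performs:

    # Namespace topology built by nvmftestinit (consolidated from the trace above)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1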
00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:49.656 11:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:49.656 [2024-07-15 11:45:23.467862] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:29:49.656 [2024-07-15 11:45:23.467914] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:49.656 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.656 [2024-07-15 11:45:23.584538] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:49.656 [2024-07-15 11:45:23.732337] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:49.656 [2024-07-15 11:45:23.732403] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:49.656 [2024-07-15 11:45:23.732430] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:49.656 [2024-07-15 11:45:23.732449] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:49.657 [2024-07-15 11:45:23.732464] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:49.657 [2024-07-15 11:45:23.732600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:49.657 [2024-07-15 11:45:23.732712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:49.657 [2024-07-15 11:45:23.732824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:29:49.657 [2024-07-15 11:45:23.732829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:50.257 Malloc0 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:50.257 11:45:24 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:50.257 [2024-07-15 11:45:24.484591] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:50.257 [2024-07-15 11:45:24.517163] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2971664 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:50.257 11:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:50.257 EAL: No free 2048 kB 
hugepages reported on node 1 00:29:52.166 11:45:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2971383 00:29:52.166 11:45:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:52.166 Read completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Read completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Read completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Read completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Read completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Read completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Read completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Read completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Read completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Write completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Write completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Write completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Read completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Read completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Read completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Read completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Write completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Write completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Write completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Write completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Read completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Write completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Write completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Write completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Write completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Write completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Write completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Write completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Read completed with error (sct=0, sc=8) 00:29:52.166 starting I/O failed 00:29:52.166 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 
starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 [2024-07-15 11:45:26.551749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 [2024-07-15 11:45:26.552056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 
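For anyone reproducing tc2 outside the harness, the rpc_cmd calls and the reconnect run logged above correspond roughly to the sequence below (flags copied verbatim from this log; rpc_cmd is effectively the test wrapper around scripts/rpc.py, and the sleep/kill at the end is what forces the disconnect whose fallout fills the rest of this section):

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # host side: queue depth 32, 4 KiB I/O, 50/50 random read/write for 10 s against the listener...
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!
    # ...then hard-kill the target two seconds in, as target_disconnect.sh@45 does above
    sleep 2 && kill -9 "$nvmfpid"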
00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 [2024-07-15 11:45:26.552656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 
Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Read completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.167 Write completed with error (sct=0, sc=8) 00:29:52.167 starting I/O failed 00:29:52.168 Write completed with error (sct=0, sc=8) 00:29:52.168 starting I/O failed 00:29:52.168 [2024-07-15 11:45:26.553020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.168 [2024-07-15 11:45:26.553354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.553402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.553648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.553681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.553893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.553924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.554188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.554218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.554417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.554441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.554632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.554649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 
00:29:52.168 [2024-07-15 11:45:26.554811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.554842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.554995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.555041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.555328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.555360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.555504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.555535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.555697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.555728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.555949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.555979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.556266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.556299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.556522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.556552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.556759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.556790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.556990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.557021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 
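The repeating pattern here is the reconnect app's qpairs trying to re-establish their TCP connections after the target was killed: errno 111 is ECONNREFUSED, i.e. nothing is listening on 10.0.0.2:4420 any more, so every nvme_tcp_qpair_connect_sock() attempt fails and the qpair cannot be recovered. A quick check from the same namespace (illustrative commands, not part of the test script) would confirm the listener is gone:

    # no LISTEN socket remains on port 4420 once nvmf_tgt has been killed
    sudo ip netns exec cvl_0_0_ns_spdk ss -ltn '( sport = :4420 )'
    # decode the errno value printed in the log
    python3 -c 'import errno; print(errno.errorcode[111])'   # ECONNREFUSED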
00:29:52.168 [2024-07-15 11:45:26.557226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.557266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.557464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.557495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.557722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.557752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.557954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.557985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.558296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.558337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.558593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.558609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.558863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.558879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.559050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.559065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.559241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.559281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.559605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.559636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 
00:29:52.168 [2024-07-15 11:45:26.559917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.559947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.560239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.560279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.560563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.560593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.560906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.560936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.561208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.561239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.561499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.561531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.561840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.561871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.562166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.562196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.562503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.562535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.562750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.562781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 
00:29:52.168 [2024-07-15 11:45:26.563048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.168 [2024-07-15 11:45:26.563078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.168 qpair failed and we were unable to recover it. 00:29:52.168 [2024-07-15 11:45:26.563396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.563429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.563689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.563720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.564025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.564055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.564337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.564370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.566289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.566351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.566590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.566624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.566909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.566942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.567232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.567275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.567471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.567502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 
00:29:52.169 [2024-07-15 11:45:26.567734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.567765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.568024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.568055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.568325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.568358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.568606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.568638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.568891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.568922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.569147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.569177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.569442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.569475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.569778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.569808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.570116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.570146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.570436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.570468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 
00:29:52.169 [2024-07-15 11:45:26.570724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.570755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.570893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.570924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.571180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.571211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.571515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.571548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.571810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.571840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.572064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.572094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.572374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.572407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.572666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.572696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.572928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.572959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.573165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.573196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 
00:29:52.169 [2024-07-15 11:45:26.573506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.573539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.573741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.573772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.574057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.574088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.574402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.574437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.574646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.574677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.574979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.575010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.575280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.575313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.575534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.575566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.575775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.575806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.169 [2024-07-15 11:45:26.576087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.576123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 
00:29:52.169 [2024-07-15 11:45:26.576362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.169 [2024-07-15 11:45:26.576394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.169 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.576584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.576615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.576813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.576844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.577107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.577138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.577426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.577459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.577691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.577723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.577948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.577978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.578288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.578322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.578577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.578608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.578864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.578895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 
00:29:52.170 [2024-07-15 11:45:26.579209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.579240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.579489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.579521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.579740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.579771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.580132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.580163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.580396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.580428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.580737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.580768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.581057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.581088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.581403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.581435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.581626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.581658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.581917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.581948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 
00:29:52.170 [2024-07-15 11:45:26.582136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.582167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.582357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.582390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.582562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.582593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.582800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.582830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.583105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.583136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.583418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.583451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.583741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.583777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.584059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.584090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.584347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.584379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.584581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.584612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 
00:29:52.170 [2024-07-15 11:45:26.584815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.584846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.585115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.585146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.585386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.585418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.585706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.585738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.586031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.586061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.586339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.586370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.586634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.586665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.586872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.170 [2024-07-15 11:45:26.586903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.170 qpair failed and we were unable to recover it. 00:29:52.170 [2024-07-15 11:45:26.587159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.587191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.587456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.587489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 
00:29:52.171 [2024-07-15 11:45:26.587802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.587834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.588040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.588070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.588352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.588384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.588584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.588616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.588874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.588905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.589112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.589143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.589350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.589382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.589581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.589611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.589816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.589847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.590065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.590097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 
00:29:52.171 [2024-07-15 11:45:26.590341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.590373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.590516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.590547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.590761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.590792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.590943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.590974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.591275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.591307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.591586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.591617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.591909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.591940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.592230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.592268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.592495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.592526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.592746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.592777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 
00:29:52.171 [2024-07-15 11:45:26.592969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.593000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.593315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.593347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.593614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.593645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.593905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.593937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.594219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.594251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.594545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.594577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.594813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.594845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.595136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.595168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.171 qpair failed and we were unable to recover it. 00:29:52.171 [2024-07-15 11:45:26.595451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.171 [2024-07-15 11:45:26.595484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.595764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.595795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 
00:29:52.172 [2024-07-15 11:45:26.596103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.596134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.596405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.596436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.596727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.596757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.597052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.597083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.597395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.597427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.597693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.597725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.597947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.597979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.598231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.598271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.598584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.598615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.598897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.598928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 
00:29:52.172 [2024-07-15 11:45:26.599133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.599164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.599389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.599422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.599711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.599741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.599958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.599990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.600283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.600315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.600602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.600633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.600759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.600790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.601047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.601079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.601274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.601306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.601591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.601623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 
00:29:52.172 [2024-07-15 11:45:26.601902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.601934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.602234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.602289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.602579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.602610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.602818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.602850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.603170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.603206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.603479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.603512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.603746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.603777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.604058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.604089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.604293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.604327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.604586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.604616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 
00:29:52.172 [2024-07-15 11:45:26.604899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.604930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.605213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.605244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.605443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.605474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.605701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.605731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.606023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.606054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.606345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.606377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.606622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.172 [2024-07-15 11:45:26.606653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.172 qpair failed and we were unable to recover it. 00:29:52.172 [2024-07-15 11:45:26.606918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.606948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.607173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.607204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.607506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.607539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 
00:29:52.173 [2024-07-15 11:45:26.607823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.607853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.608115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.608145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.608369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.608403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.608689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.608720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.609032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.609063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.609339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.609372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.609560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.609590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.609798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.609829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.610090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.610121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.610325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.610358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 
00:29:52.173 [2024-07-15 11:45:26.610613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.610645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.610920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.610957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.611249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.611290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.611571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.611603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.611893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.611924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.612217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.612248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.612539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.612571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.612768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.612799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.613129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.613160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.613439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.613473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 
00:29:52.173 [2024-07-15 11:45:26.613707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.613738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.614002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.614033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.614243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.614285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.614424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.614455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.614761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.614792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.614986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.615018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.615280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.615313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.615600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.615631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.615857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.615888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.616080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.616111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 
00:29:52.173 [2024-07-15 11:45:26.616399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.616431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.616637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.616668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.616803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.616834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.617043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.617074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.617296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.617330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.617550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.617581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.617842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.617874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.618011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.173 [2024-07-15 11:45:26.618041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.173 qpair failed and we were unable to recover it. 00:29:52.173 [2024-07-15 11:45:26.618252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.174 [2024-07-15 11:45:26.618314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.174 qpair failed and we were unable to recover it. 00:29:52.174 [2024-07-15 11:45:26.618628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.174 [2024-07-15 11:45:26.618660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.174 qpair failed and we were unable to recover it. 
00:29:52.174 [2024-07-15 11:45:26.618816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.174 [2024-07-15 11:45:26.618847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.174 qpair failed and we were unable to recover it. 00:29:52.174 [2024-07-15 11:45:26.619160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.174 [2024-07-15 11:45:26.619190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.174 qpair failed and we were unable to recover it. 00:29:52.174 [2024-07-15 11:45:26.619401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.174 [2024-07-15 11:45:26.619433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.174 qpair failed and we were unable to recover it. 00:29:52.174 [2024-07-15 11:45:26.619644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.174 [2024-07-15 11:45:26.619675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.174 qpair failed and we were unable to recover it. 00:29:52.174 [2024-07-15 11:45:26.619995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.174 [2024-07-15 11:45:26.620026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.174 qpair failed and we were unable to recover it. 00:29:52.174 [2024-07-15 11:45:26.620272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.174 [2024-07-15 11:45:26.620305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.174 qpair failed and we were unable to recover it. 00:29:52.174 [2024-07-15 11:45:26.620621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.174 [2024-07-15 11:45:26.620652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.174 qpair failed and we were unable to recover it. 00:29:52.174 [2024-07-15 11:45:26.620847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.174 [2024-07-15 11:45:26.620878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.174 qpair failed and we were unable to recover it. 00:29:52.174 [2024-07-15 11:45:26.621194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.174 [2024-07-15 11:45:26.621225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.174 qpair failed and we were unable to recover it. 00:29:52.174 [2024-07-15 11:45:26.621459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.174 [2024-07-15 11:45:26.621491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.174 qpair failed and we were unable to recover it. 
00:29:52.174 [2024-07-15 11:45:26.621791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.174 [2024-07-15 11:45:26.621821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.174 qpair failed and we were unable to recover it. 00:29:52.174 [2024-07-15 11:45:26.622071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.174 [2024-07-15 11:45:26.622102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.174 qpair failed and we were unable to recover it. 00:29:52.174 [2024-07-15 11:45:26.622328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.174 [2024-07-15 11:45:26.622363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.174 qpair failed and we were unable to recover it. 00:29:52.174 [2024-07-15 11:45:26.622623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.174 [2024-07-15 11:45:26.622673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.174 qpair failed and we were unable to recover it. 00:29:52.174 [2024-07-15 11:45:26.622972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.174 [2024-07-15 11:45:26.623002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.174 qpair failed and we were unable to recover it. 00:29:52.174 [2024-07-15 11:45:26.623273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.174 [2024-07-15 11:45:26.623306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.174 qpair failed and we were unable to recover it. 00:29:52.174 [2024-07-15 11:45:26.623528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.174 [2024-07-15 11:45:26.623559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.174 qpair failed and we were unable to recover it. 00:29:52.174 [2024-07-15 11:45:26.623812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.174 [2024-07-15 11:45:26.623844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.174 qpair failed and we were unable to recover it. 00:29:52.174 [2024-07-15 11:45:26.624037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.174 [2024-07-15 11:45:26.624068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.174 qpair failed and we were unable to recover it. 00:29:52.174 [2024-07-15 11:45:26.624359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.174 [2024-07-15 11:45:26.624391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.174 qpair failed and we were unable to recover it. 
00:29:52.447 [2024-07-15 11:45:26.624606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.447 [2024-07-15 11:45:26.624640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.447 qpair failed and we were unable to recover it. 00:29:52.447 [2024-07-15 11:45:26.624926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.447 [2024-07-15 11:45:26.624958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.447 qpair failed and we were unable to recover it. 00:29:52.447 [2024-07-15 11:45:26.625173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.447 [2024-07-15 11:45:26.625204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.447 qpair failed and we were unable to recover it. 00:29:52.447 [2024-07-15 11:45:26.625438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.447 [2024-07-15 11:45:26.625471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.447 qpair failed and we were unable to recover it. 00:29:52.447 [2024-07-15 11:45:26.625610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.447 [2024-07-15 11:45:26.625640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.447 qpair failed and we were unable to recover it. 00:29:52.447 [2024-07-15 11:45:26.625858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.447 [2024-07-15 11:45:26.625890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.447 qpair failed and we were unable to recover it. 00:29:52.447 [2024-07-15 11:45:26.626116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.447 [2024-07-15 11:45:26.626147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.447 qpair failed and we were unable to recover it. 00:29:52.447 [2024-07-15 11:45:26.626379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.447 [2024-07-15 11:45:26.626412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.447 qpair failed and we were unable to recover it. 00:29:52.447 [2024-07-15 11:45:26.626602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.447 [2024-07-15 11:45:26.626644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.447 qpair failed and we were unable to recover it. 00:29:52.447 [2024-07-15 11:45:26.626957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.447 [2024-07-15 11:45:26.626989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.447 qpair failed and we were unable to recover it. 
00:29:52.447 [2024-07-15 11:45:26.627282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.447 [2024-07-15 11:45:26.627314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.447 qpair failed and we were unable to recover it. 00:29:52.447 [2024-07-15 11:45:26.627608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.447 [2024-07-15 11:45:26.627640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.447 qpair failed and we were unable to recover it. 00:29:52.447 [2024-07-15 11:45:26.627901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.447 [2024-07-15 11:45:26.627932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.447 qpair failed and we were unable to recover it. 00:29:52.447 [2024-07-15 11:45:26.628219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.447 [2024-07-15 11:45:26.628250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.447 qpair failed and we were unable to recover it. 00:29:52.447 [2024-07-15 11:45:26.628616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.447 [2024-07-15 11:45:26.628648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.447 qpair failed and we were unable to recover it. 00:29:52.447 [2024-07-15 11:45:26.628945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.447 [2024-07-15 11:45:26.628977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.447 qpair failed and we were unable to recover it. 00:29:52.447 [2024-07-15 11:45:26.629273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.447 [2024-07-15 11:45:26.629306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.447 qpair failed and we were unable to recover it. 00:29:52.447 [2024-07-15 11:45:26.629459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.447 [2024-07-15 11:45:26.629491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.447 qpair failed and we were unable to recover it. 00:29:52.447 [2024-07-15 11:45:26.629782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.447 [2024-07-15 11:45:26.629813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.447 qpair failed and we were unable to recover it. 00:29:52.447 [2024-07-15 11:45:26.630110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.447 [2024-07-15 11:45:26.630142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.447 qpair failed and we were unable to recover it. 
00:29:52.447 [2024-07-15 11:45:26.630430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.447 [2024-07-15 11:45:26.630463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.447 qpair failed and we were unable to recover it. 00:29:52.447 [2024-07-15 11:45:26.630685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.447 [2024-07-15 11:45:26.630716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.447 qpair failed and we were unable to recover it. 00:29:52.447 [2024-07-15 11:45:26.630911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.447 [2024-07-15 11:45:26.630943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.447 qpair failed and we were unable to recover it. 00:29:52.447 [2024-07-15 11:45:26.631230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.447 [2024-07-15 11:45:26.631270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.447 qpair failed and we were unable to recover it. 00:29:52.447 [2024-07-15 11:45:26.631551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.447 [2024-07-15 11:45:26.631583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.447 qpair failed and we were unable to recover it. 00:29:52.448 [2024-07-15 11:45:26.631804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.448 [2024-07-15 11:45:26.631835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.448 qpair failed and we were unable to recover it. 00:29:52.448 [2024-07-15 11:45:26.632059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.448 [2024-07-15 11:45:26.632090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.448 qpair failed and we were unable to recover it. 00:29:52.448 [2024-07-15 11:45:26.632283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.448 [2024-07-15 11:45:26.632315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.448 qpair failed and we were unable to recover it. 00:29:52.448 [2024-07-15 11:45:26.632508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.448 [2024-07-15 11:45:26.632539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.448 qpair failed and we were unable to recover it. 00:29:52.448 [2024-07-15 11:45:26.632832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.448 [2024-07-15 11:45:26.632863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.448 qpair failed and we were unable to recover it. 
00:29:52.448 [2024-07-15 11:45:26.633146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.448 [2024-07-15 11:45:26.633178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.448 qpair failed and we were unable to recover it. 00:29:52.448 [2024-07-15 11:45:26.633441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.448 [2024-07-15 11:45:26.633474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.448 qpair failed and we were unable to recover it. 00:29:52.448 [2024-07-15 11:45:26.633756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.448 [2024-07-15 11:45:26.633788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.448 qpair failed and we were unable to recover it. 00:29:52.448 [2024-07-15 11:45:26.634022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.448 [2024-07-15 11:45:26.634053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.448 qpair failed and we were unable to recover it. 00:29:52.448 [2024-07-15 11:45:26.634251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.448 [2024-07-15 11:45:26.634297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.448 qpair failed and we were unable to recover it. 00:29:52.448 [2024-07-15 11:45:26.634531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.448 [2024-07-15 11:45:26.634562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.448 qpair failed and we were unable to recover it. 00:29:52.448 [2024-07-15 11:45:26.634796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.448 [2024-07-15 11:45:26.634827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.448 qpair failed and we were unable to recover it. 00:29:52.448 [2024-07-15 11:45:26.635029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.448 [2024-07-15 11:45:26.635060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.448 qpair failed and we were unable to recover it. 00:29:52.448 [2024-07-15 11:45:26.635353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.448 [2024-07-15 11:45:26.635387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.448 qpair failed and we were unable to recover it. 00:29:52.448 [2024-07-15 11:45:26.635677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.448 [2024-07-15 11:45:26.635708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.448 qpair failed and we were unable to recover it. 
00:29:52.451 [2024-07-15 11:45:26.695307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.451 [2024-07-15 11:45:26.695340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.451 qpair failed and we were unable to recover it. 00:29:52.451 [2024-07-15 11:45:26.695613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.451 [2024-07-15 11:45:26.695645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.451 qpair failed and we were unable to recover it. 00:29:52.451 [2024-07-15 11:45:26.695868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.451 [2024-07-15 11:45:26.695901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.451 qpair failed and we were unable to recover it. 00:29:52.451 [2024-07-15 11:45:26.696199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.451 [2024-07-15 11:45:26.696231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.451 qpair failed and we were unable to recover it. 00:29:52.451 [2024-07-15 11:45:26.696521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.451 [2024-07-15 11:45:26.696554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.451 qpair failed and we were unable to recover it. 00:29:52.451 [2024-07-15 11:45:26.696853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.451 [2024-07-15 11:45:26.696885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.451 qpair failed and we were unable to recover it. 00:29:52.451 [2024-07-15 11:45:26.697178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.451 [2024-07-15 11:45:26.697210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.451 qpair failed and we were unable to recover it. 00:29:52.451 [2024-07-15 11:45:26.697504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.451 [2024-07-15 11:45:26.697538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.451 qpair failed and we were unable to recover it. 00:29:52.451 [2024-07-15 11:45:26.697832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.451 [2024-07-15 11:45:26.697863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.451 qpair failed and we were unable to recover it. 00:29:52.451 [2024-07-15 11:45:26.698156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.451 [2024-07-15 11:45:26.698188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.451 qpair failed and we were unable to recover it. 
00:29:52.451 [2024-07-15 11:45:26.698373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.451 [2024-07-15 11:45:26.698407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.451 qpair failed and we were unable to recover it. 00:29:52.451 [2024-07-15 11:45:26.698706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.451 [2024-07-15 11:45:26.698738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.451 qpair failed and we were unable to recover it. 00:29:52.451 [2024-07-15 11:45:26.698891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.451 [2024-07-15 11:45:26.698923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.451 qpair failed and we were unable to recover it. 00:29:52.451 [2024-07-15 11:45:26.699197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.451 [2024-07-15 11:45:26.699228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.451 qpair failed and we were unable to recover it. 00:29:52.451 [2024-07-15 11:45:26.699455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.451 [2024-07-15 11:45:26.699489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.451 qpair failed and we were unable to recover it. 00:29:52.451 [2024-07-15 11:45:26.699711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.451 [2024-07-15 11:45:26.699742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.451 qpair failed and we were unable to recover it. 00:29:52.451 [2024-07-15 11:45:26.700046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.451 [2024-07-15 11:45:26.700078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.451 qpair failed and we were unable to recover it. 00:29:52.451 [2024-07-15 11:45:26.700279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.700314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.700586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.700618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.700909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.700942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 
00:29:52.452 [2024-07-15 11:45:26.701163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.701195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.701407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.701441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.701659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.701691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.702027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.702064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.702350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.702384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.702681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.702714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.703039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.703070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.703284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.703317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.703541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.703574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.703736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.703767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 
00:29:52.452 [2024-07-15 11:45:26.704008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.704039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.704237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.704279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.704551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.704582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.704801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.704833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.705093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.705125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.705444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.705478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.705643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.705676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.705956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.705988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.706294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.706328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.706471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.706502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 
00:29:52.452 [2024-07-15 11:45:26.706773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.706805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.707003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.707035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.707346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.707381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.707656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.707687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.707950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.707982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.708179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.708210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.708510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.708543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.708823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.708855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.709130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.709161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.709409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.709443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 
00:29:52.452 [2024-07-15 11:45:26.709767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.709804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.709947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.709979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.710213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.710245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.710470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.710503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.710797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.710829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.711121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.711154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.711369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.711403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.711616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.711648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.711976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.712009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.712307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.712341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 
00:29:52.452 [2024-07-15 11:45:26.712629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.712660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.712963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.712994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.713274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.713308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.713613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.713645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.713933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.713965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.714298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.714333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.714637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.714670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.714970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.715002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.715246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.715289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.715513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.715545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 
00:29:52.452 [2024-07-15 11:45:26.715758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.715790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.716036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.716068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.716342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.716375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.716679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.716710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.716947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.716980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.717279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.717312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.717522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.717554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.717856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.717888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.718036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.718069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.452 [2024-07-15 11:45:26.718280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.718314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 
00:29:52.452 [2024-07-15 11:45:26.718649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.452 [2024-07-15 11:45:26.718681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.452 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.718958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.718990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.719195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.719228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.719514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.719547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.719873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.719905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.720181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.720213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.720529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.720564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.720864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.720897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.721180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.721212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.721420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.721453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 
00:29:52.453 [2024-07-15 11:45:26.721704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.721737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.721942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.721974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.722282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.722316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.722462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.722494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.722649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.722681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.722899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.722932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.723231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.723288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.723617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.723648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.723859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.723891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.724198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.724229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 
00:29:52.453 [2024-07-15 11:45:26.724570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.724603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.724804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.724835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.725036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.725068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.725366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.725399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.725710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.725742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.726060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.726092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.726349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.726383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.726632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.726664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.726879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.726910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.727153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.727185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 
00:29:52.453 [2024-07-15 11:45:26.727485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.727518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.727810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.727841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.728114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.728145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.728375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.728409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.728715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.728747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.729055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.729087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.729409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.729443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.729712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.729743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.730048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.730085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.730375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.730410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 
00:29:52.453 [2024-07-15 11:45:26.730703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.730735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.731029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.731061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.731356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.731389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.731683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.731715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.732010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.732042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.732336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.732370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.732661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.732693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.732936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.732968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.733240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.733283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.733494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.733528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 
00:29:52.453 [2024-07-15 11:45:26.733732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.733765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.734034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.734066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.734414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.734448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.734749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.734781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.735001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.735032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.735276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.453 [2024-07-15 11:45:26.735311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.453 qpair failed and we were unable to recover it. 00:29:52.453 [2024-07-15 11:45:26.735602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.454 [2024-07-15 11:45:26.735634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.454 qpair failed and we were unable to recover it. 00:29:52.454 [2024-07-15 11:45:26.735911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.454 [2024-07-15 11:45:26.735943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.454 qpair failed and we were unable to recover it. 00:29:52.454 [2024-07-15 11:45:26.736149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.454 [2024-07-15 11:45:26.736181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.454 qpair failed and we were unable to recover it. 00:29:52.454 [2024-07-15 11:45:26.736454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.454 [2024-07-15 11:45:26.736489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.454 qpair failed and we were unable to recover it. 
00:29:52.457 [2024-07-15 11:45:26.793391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.793425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.793636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.793668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.793990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.794022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.794322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.794356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.794594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.794626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.794877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.794915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.795131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.795163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.795457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.795491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.795634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.795668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.795944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.795976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 
00:29:52.457 [2024-07-15 11:45:26.796172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.796204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.796494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.796528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.796823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.796855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.797053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.797084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.797284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.797318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.797612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.797644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.798010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.798043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.798293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.798327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.798496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.798530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.798743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.798776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 
00:29:52.457 [2024-07-15 11:45:26.799007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.799039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.799313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.799348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.799623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.799654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.799912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.799945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.800169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.800202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.800502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.800536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.800825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.800857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.801105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.801136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.801445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.801478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.801763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.801795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 
00:29:52.457 [2024-07-15 11:45:26.802096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.802128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.802460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.802494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.802734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.802777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.803104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.803136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.803464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.803498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.803741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.803774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.804063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.804094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.804409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.804442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.804696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.804729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.804894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.804927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 
00:29:52.457 [2024-07-15 11:45:26.805070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.805103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.805353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.805389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.805613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.805646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.805865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.805897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.806096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.806128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.806344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.806377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.457 [2024-07-15 11:45:26.806595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.457 [2024-07-15 11:45:26.806627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.457 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.806966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.806998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.807212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.807245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.807596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.807629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 
00:29:52.458 [2024-07-15 11:45:26.807962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.807994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.808243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.808288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.808615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.808647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.808921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.808952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.809109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.809141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.809377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.809411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.809623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.809655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.809885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.809917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.810154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.810187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.810390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.810427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 
00:29:52.458 [2024-07-15 11:45:26.810700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.810733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.810900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.810932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.811172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.811205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.811538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.811572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.811826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.811859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.811989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.812021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.812290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.812324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.812544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.812576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.812877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.812909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.813205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.813238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 
00:29:52.458 [2024-07-15 11:45:26.813397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.813430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.813727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.813759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.814011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.814044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.814337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.814371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.814645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.814676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.815013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.815049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.815355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.815389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.815674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.815708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.816000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.816033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.816335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.816368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 
00:29:52.458 [2024-07-15 11:45:26.816539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.816570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.816864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.816896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.817184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.817216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.817520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.817553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.817763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.817796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.818000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.818031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.818323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.818358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.818584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.818617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.818825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.818857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.819152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.819185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 
00:29:52.458 [2024-07-15 11:45:26.819486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.819520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.819809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.819841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.820139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.820171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.820444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.820477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.820617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.820649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.820817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.820848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.821003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.821034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.821238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.821283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.821661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.821693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.822039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.822070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 
00:29:52.458 [2024-07-15 11:45:26.822376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.822410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.822632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.822664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.822960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.822992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.823288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.823321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.823595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.823627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.823907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.823939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.824281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.824315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.824616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.824648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.824844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.824878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.825097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.825129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 
00:29:52.458 [2024-07-15 11:45:26.825360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.825394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.825596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.825628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.825834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.825866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.826154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.458 [2024-07-15 11:45:26.826186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.458 qpair failed and we were unable to recover it. 00:29:52.458 [2024-07-15 11:45:26.826442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.826476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.826698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.826730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.827030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.827061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.827341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.827374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.827699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.827731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.827987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.828019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 
00:29:52.459 [2024-07-15 11:45:26.828348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.828381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.828616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.828648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.828856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.828888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.829176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.829207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.829382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.829415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.829708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.829740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.830040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.830071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.830280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.830320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.830526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.830557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.830827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.830859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 
00:29:52.459 [2024-07-15 11:45:26.831100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.831133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.831346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.831380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.831674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.831705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.832038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.832071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.832371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.832404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.832691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.832724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.833006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.833038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.833275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.833308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.833530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.833561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.833795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.833828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 
00:29:52.459 [2024-07-15 11:45:26.834101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.834133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.834367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.834400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.834647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.834680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.834926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.834957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.835272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.835307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.835609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.835642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.835968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.836001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.836217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.836249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.836479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.836511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.836721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.836753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 
00:29:52.459 [2024-07-15 11:45:26.836983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.837015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.837287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.837321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.837525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.837557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.837834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.837866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.838077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.838114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.838431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.838464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.838608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.838641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.838874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.838906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.839211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.839242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.839550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.839584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 
00:29:52.459 [2024-07-15 11:45:26.839833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.839865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.840062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.840094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.840312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.840346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.840502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.840533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.840831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.840863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.841166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.841198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.841483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.841516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.841742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.841774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.842085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.842117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 00:29:52.459 [2024-07-15 11:45:26.842399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.459 [2024-07-15 11:45:26.842433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.459 qpair failed and we were unable to recover it. 
00:29:52.459 [2024-07-15 11:45:26.842595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.842627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.842876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.842908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.843182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.843214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.843437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.843470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.843775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.843806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.844007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.844039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.844347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.844381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.844680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.844712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.845031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.845063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.845356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.845389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 
00:29:52.460 [2024-07-15 11:45:26.845692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.845724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.845957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.845989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.846137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.846169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.846403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.846437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.846736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.846768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.847056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.847089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.847417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.847451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.847724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.847755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.848081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.848114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.848359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.848398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 
00:29:52.460 [2024-07-15 11:45:26.848634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.848667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.848959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.848991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.849290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.849324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.849530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.849563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.849835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.849867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.850078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.850110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.850415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.850449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.850675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.850707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.851056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.851088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.851329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.851362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 
00:29:52.460 [2024-07-15 11:45:26.851665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.851698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.851941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.851974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.852280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.852314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.852619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.852650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.852919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.852950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.853268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.853302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.853603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.853635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.853956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.853988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.854304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.854338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.854565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.854598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 
00:29:52.460 [2024-07-15 11:45:26.854881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.854913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.855223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.855280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.855438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.855470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.855774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.855806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.856120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.856153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.856481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.856514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.856723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.856755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.857056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.857088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.857399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.857432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.857754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.857786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 
00:29:52.460 [2024-07-15 11:45:26.857936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.857968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.858205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.858236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.858567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.858606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.858929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.858960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.859253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.859299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.859584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.859616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.859818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.859851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.860046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.860078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.860308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.860343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.860672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.860704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 
00:29:52.460 [2024-07-15 11:45:26.861008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.861040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.861311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.861345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.861548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.861580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.861903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.861935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.862073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.862104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.862413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.460 [2024-07-15 11:45:26.862446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.460 qpair failed and we were unable to recover it. 00:29:52.460 [2024-07-15 11:45:26.862660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.862692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.862851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.862883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.863089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.863122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.863448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.863483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 
00:29:52.461 [2024-07-15 11:45:26.863772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.863805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.864026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.864057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.864283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.864318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.864566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.864599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.864874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.864906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.865183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.865216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.865466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.865500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.865799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.865830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.866073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.866105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.866237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.866288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 
00:29:52.461 [2024-07-15 11:45:26.866513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.866545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.866864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.866896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.867170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.867202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.867453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.867487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.867715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.867747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.867969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.868000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.868327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.868362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.868502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.868534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.868830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.868862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.869176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.869208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 
00:29:52.461 [2024-07-15 11:45:26.869502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.869536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.869828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.869861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.870152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.870185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.870488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.870521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.870749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.870782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.871006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.871038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.871321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.871354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.871663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.871696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.871895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.871927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.872143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.872175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 
00:29:52.461 [2024-07-15 11:45:26.872498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.872533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.872812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.872845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.873146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.873177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.873391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.873425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.873569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.873602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.873753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.873785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.874109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.874146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.874393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.874427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.874718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.874751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.875045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.875076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 
00:29:52.461 [2024-07-15 11:45:26.875240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.875282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.875568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.875601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.875808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.875841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.876062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.876094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.876306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.876340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.876543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.876575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.876728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.876759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.877056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.877089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.877382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.877416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.877620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.877652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 
00:29:52.461 [2024-07-15 11:45:26.878033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.878111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.878445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.878484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.878712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.878744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.461 qpair failed and we were unable to recover it. 00:29:52.461 [2024-07-15 11:45:26.879022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.461 [2024-07-15 11:45:26.879054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.879267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.879301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.879602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.879633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.879839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.879871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.880085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.880117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.880327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.880361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.880661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.880692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 
00:29:52.462 [2024-07-15 11:45:26.880898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.880930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.881227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.881270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.881484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.881516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.881764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.881805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.882132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.882164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.882385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.882418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.882563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.882596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.882874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.882908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.883055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.883087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.883406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.883440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 
00:29:52.462 [2024-07-15 11:45:26.883668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.883700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.884008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.884041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.884344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.884378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.884600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.884632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.884933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.884969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.885171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.885204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.885488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.885522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.885668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.885701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.886023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.886055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.886278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.886313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 
00:29:52.462 [2024-07-15 11:45:26.886517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.886550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.886821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.886853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.887085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.887117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.887413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.887446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.887719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.887751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.888029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.888062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.888220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.888252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.888581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.888615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.888912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.888944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.889231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.889274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 
00:29:52.462 [2024-07-15 11:45:26.889570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.889603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.889828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.889861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.890176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.890209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.890385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.890420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.890669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.890703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.890920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.890954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.891168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.891201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.891496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.891530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.891740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.891774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.891933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.891967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 
00:29:52.462 [2024-07-15 11:45:26.892348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.892384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.892721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.892755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.892921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.892956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.893196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.893241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.893507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.893543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.893858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.893893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.894175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.894208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.894511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.894544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.894808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.894840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.895124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.895157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 
00:29:52.462 [2024-07-15 11:45:26.895381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.895414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.895773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.895805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.462 [2024-07-15 11:45:26.896056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.462 [2024-07-15 11:45:26.896088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.462 qpair failed and we were unable to recover it. 00:29:52.735 [2024-07-15 11:45:26.896378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.735 [2024-07-15 11:45:26.896414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.735 qpair failed and we were unable to recover it. 00:29:52.735 [2024-07-15 11:45:26.896620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.735 [2024-07-15 11:45:26.896654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.735 qpair failed and we were unable to recover it. 00:29:52.735 [2024-07-15 11:45:26.896962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.735 [2024-07-15 11:45:26.896995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.735 qpair failed and we were unable to recover it. 00:29:52.735 [2024-07-15 11:45:26.897200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.735 [2024-07-15 11:45:26.897232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.735 qpair failed and we were unable to recover it. 00:29:52.735 [2024-07-15 11:45:26.897469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.735 [2024-07-15 11:45:26.897501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.735 qpair failed and we were unable to recover it. 00:29:52.735 [2024-07-15 11:45:26.897661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.735 [2024-07-15 11:45:26.897693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.735 qpair failed and we were unable to recover it. 00:29:52.735 [2024-07-15 11:45:26.897992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.735 [2024-07-15 11:45:26.898024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.735 qpair failed and we were unable to recover it. 
00:29:52.735 [2024-07-15 11:45:26.898187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.735 [2024-07-15 11:45:26.898219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.735 qpair failed and we were unable to recover it. 00:29:52.735 [2024-07-15 11:45:26.898493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.735 [2024-07-15 11:45:26.898527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.735 qpair failed and we were unable to recover it. 00:29:52.735 [2024-07-15 11:45:26.898821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.735 [2024-07-15 11:45:26.898853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.735 qpair failed and we were unable to recover it. 00:29:52.735 [2024-07-15 11:45:26.899147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.735 [2024-07-15 11:45:26.899180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.735 qpair failed and we were unable to recover it. 00:29:52.735 [2024-07-15 11:45:26.899475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.735 [2024-07-15 11:45:26.899509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.735 qpair failed and we were unable to recover it. 00:29:52.735 [2024-07-15 11:45:26.899753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.735 [2024-07-15 11:45:26.899786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.735 qpair failed and we were unable to recover it. 00:29:52.735 [2024-07-15 11:45:26.900004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.735 [2024-07-15 11:45:26.900036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.735 qpair failed and we were unable to recover it. 00:29:52.735 [2024-07-15 11:45:26.900331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.735 [2024-07-15 11:45:26.900364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.735 qpair failed and we were unable to recover it. 00:29:52.735 [2024-07-15 11:45:26.900613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.735 [2024-07-15 11:45:26.900646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.735 qpair failed and we were unable to recover it. 00:29:52.735 [2024-07-15 11:45:26.900887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.735 [2024-07-15 11:45:26.900918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.735 qpair failed and we were unable to recover it. 
00:29:52.735 [2024-07-15 11:45:26.901192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.735 [2024-07-15 11:45:26.901224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.735 qpair failed and we were unable to recover it. 00:29:52.735 [2024-07-15 11:45:26.901465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.735 [2024-07-15 11:45:26.901498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.735 qpair failed and we were unable to recover it. 00:29:52.735 [2024-07-15 11:45:26.901696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.735 [2024-07-15 11:45:26.901729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.735 qpair failed and we were unable to recover it. 00:29:52.735 [2024-07-15 11:45:26.902027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.735 [2024-07-15 11:45:26.902059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.735 qpair failed and we were unable to recover it. 00:29:52.735 [2024-07-15 11:45:26.902372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.735 [2024-07-15 11:45:26.902404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.902682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.902714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.902938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.902970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.903174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.903206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.903372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.903405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.903711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.903743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 
00:29:52.736 [2024-07-15 11:45:26.904063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.904096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.904381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.904415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.904718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.904750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.904968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.905005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.905212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.905245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.905504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.905537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.905770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.905803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.906091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.906123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.906325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.906360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.906559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.906591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 
00:29:52.736 [2024-07-15 11:45:26.906804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.906836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.907043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.907075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.907293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.907327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.907525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.907557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.907772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.907803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.908083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.908115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.908331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.908365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.908653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.908685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.908934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.908967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.909235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.909277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 
00:29:52.736 [2024-07-15 11:45:26.909482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.909514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.909808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.909840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.910165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.910197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.910509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.910542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.910688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.910721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.911027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.911058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.911337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.911370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.911670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.911702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.911982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.912013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.912318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.912352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 
00:29:52.736 [2024-07-15 11:45:26.912576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.912609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.912921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.912953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.913165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.913197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.913367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.913401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.913676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.913708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.914017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.914049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.914327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.914360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.914668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.914700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.914982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.915014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.915168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.915201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 
00:29:52.736 [2024-07-15 11:45:26.915458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.915492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.915708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.915740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.916036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.916068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.916370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.916408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.916610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.916643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.916914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.916946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.917159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.917191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.917479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.917512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.917754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.736 [2024-07-15 11:45:26.917786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.736 qpair failed and we were unable to recover it. 00:29:52.736 [2024-07-15 11:45:26.918138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.918170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 
00:29:52.737 [2024-07-15 11:45:26.918438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.918472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.918754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.918786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.919090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.919122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.919417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.919450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.919769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.919801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.920029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.920060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.920332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.920366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.920614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.920646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.920918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.920951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.921247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.921290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 
00:29:52.737 [2024-07-15 11:45:26.921499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.921531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.921777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.921809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.922131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.922163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.922464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.922497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.922782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.922815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.923114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.923146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.923439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.923473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.923774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.923806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.924090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.924122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.924394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.924428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 
00:29:52.737 [2024-07-15 11:45:26.924686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.924719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.925024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.925056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.925345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.925378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.925685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.925717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.925975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.926008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.926302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.926335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.926556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.926589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.926876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.926910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.927143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.927175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.927448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.927482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 
00:29:52.737 [2024-07-15 11:45:26.927797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.927830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.928125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.928157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.928479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.928513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.928786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.928824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.929132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.929164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.929312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.929345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.929547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.929579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.929851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.929884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.930165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.930197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.930443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.930476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 
00:29:52.737 [2024-07-15 11:45:26.930747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.930780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.931107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.931139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.931425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.931458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.931759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.931791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.932101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.932133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.932341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.932374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.932626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.932658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.932797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.932830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.933040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.933072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.933312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.933346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 
00:29:52.737 [2024-07-15 11:45:26.933623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.933656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.933989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.934021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.934293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.934327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.934573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.737 [2024-07-15 11:45:26.934605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.737 qpair failed and we were unable to recover it. 00:29:52.737 [2024-07-15 11:45:26.934905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.934938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.935244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.935287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.935526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.935559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.935878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.935910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.936122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.936154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.936434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.936468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 
00:29:52.738 [2024-07-15 11:45:26.936602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.936639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.936909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.936942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.937138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.937170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.937468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.937501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.937800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.937834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.938122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.938155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.938355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.938388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.938712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.938745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.939035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.939067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.939364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.939398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 
00:29:52.738 [2024-07-15 11:45:26.939597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.939629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.939773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.939806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.940040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.940072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.940345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.940378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.940591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.940624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.940837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.940869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.941105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.941139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.941441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.941475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.941762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.941795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.942069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.942102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 
00:29:52.738 [2024-07-15 11:45:26.942434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.942467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.942766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.942799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.942997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.943030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.943337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.943371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.943651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.943684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.943934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.943966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.944275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.944308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.944632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.944665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.944919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.944952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.945154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.945187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 
00:29:52.738 [2024-07-15 11:45:26.945486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.945519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.945657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.945689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.946002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.946034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.946276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.946310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.946537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.946570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.946771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.946804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.947023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.947055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.947329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.947362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.947600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.947632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 00:29:52.738 [2024-07-15 11:45:26.947929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.738 [2024-07-15 11:45:26.947960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.738 qpair failed and we were unable to recover it. 
00:29:52.739 [2024-07-15 11:45:26.948265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.948304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.948525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.948557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.948865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.948897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.949095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.949127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.949432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.949466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.949746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.949778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.950006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.950038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.950338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.950372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.950712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.950744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.950965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.950997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 
00:29:52.739 [2024-07-15 11:45:26.951200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.951233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.951390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.951423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.951674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.951706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.951909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.951941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.952250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.952293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.952587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.952620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.952843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.952875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.953177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.953209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.953466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.953500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.953783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.953816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 
00:29:52.739 [2024-07-15 11:45:26.954057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.954090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.954365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.954399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.954703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.954735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.954952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.954985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.955265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.955300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.955574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.955606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.955893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.955926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.956225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.956267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.956394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.956426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.956652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.956684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 
00:29:52.739 [2024-07-15 11:45:26.956827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.956859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.957182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.957214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.957509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.957542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.957855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.957887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.958162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.958194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.958508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.958542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.958819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.958850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.959130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.959162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.959474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.959508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.959789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.959821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 
00:29:52.739 [2024-07-15 11:45:26.960050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.960088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.960295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.960330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.960659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.960691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.960891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.960923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.961121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.961155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.961369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.961402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.961544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.961577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.961878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.961910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.962135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.962167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.962395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.962429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 
00:29:52.739 [2024-07-15 11:45:26.962645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.962677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.962819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.962851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.963089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.963121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.963448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.963481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.963790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.739 [2024-07-15 11:45:26.963823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.739 qpair failed and we were unable to recover it. 00:29:52.739 [2024-07-15 11:45:26.964103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.964136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.964442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.964476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.964755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.964788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.965008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.965040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.965184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.965216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 
00:29:52.740 [2024-07-15 11:45:26.965550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.965583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.965865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.965897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.966128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.966160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.966435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.966468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.966680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.966713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.967011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.967043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.967273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.967306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.967594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.967627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.967774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.967807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.968107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.968139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 
00:29:52.740 [2024-07-15 11:45:26.968279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.968313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.968514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.968546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.968874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.968907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.969207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.969239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.969554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.969587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.969874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.969906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.970212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.970245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.970478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.970511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.970789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.970820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.971090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.971121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 
00:29:52.740 [2024-07-15 11:45:26.971431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.971469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.971752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.971783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.972052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.972084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.972357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.972392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.972706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.972738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.972978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.973010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.973210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.973243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.973525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.973557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.973883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.973916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.974219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.974251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 
00:29:52.740 [2024-07-15 11:45:26.974479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.974511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.974649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.974681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.974974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.975006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.975309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.975342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.975653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.975685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.976000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.976032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.976329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.976363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.976655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.976687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.976962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.976995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.977306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.977339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 
00:29:52.740 [2024-07-15 11:45:26.977604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.977636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.977854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.977886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.978158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.978190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.978521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.978555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.978805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.978838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.979114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.979145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.979446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.979479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.979718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.979751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.980021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.980054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.740 [2024-07-15 11:45:26.980354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.980388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 
00:29:52.740 [2024-07-15 11:45:26.980598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.740 [2024-07-15 11:45:26.980631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.740 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.980831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.980864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.981159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.981191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.981509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.981543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.981830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.981862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.982136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.982168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.982417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.982450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.982770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.982803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.983114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.983147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.983379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.983413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 
00:29:52.741 [2024-07-15 11:45:26.983635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.983673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.983821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.983853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.984063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.984095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.984235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.984280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.984578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.984611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.984847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.984879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.985176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.985208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.985544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.985578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.985798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.985830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.986030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.986063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 
00:29:52.741 [2024-07-15 11:45:26.986362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.986398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.986704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.986737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.986946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.986978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.987178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.987210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.987457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.987490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.987722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.987754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.987982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.988015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.988314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.988347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.988579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.988612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.988917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.988950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 
00:29:52.741 [2024-07-15 11:45:26.989252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.989295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.989609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.989641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.989947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.989980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.990290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.990324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.990545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.990578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.990877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.990909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.991221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.991266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.991570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.991602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.991896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.991930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.992163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.992194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 
00:29:52.741 [2024-07-15 11:45:26.992455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.992490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.992713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.992745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.993051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.993083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.993293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.993327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.993623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.993656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.993970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.994001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.994304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.994339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.994627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.994659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.994958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.994989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 00:29:52.741 [2024-07-15 11:45:26.995275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.995309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.741 qpair failed and we were unable to recover it. 
00:29:52.741 [2024-07-15 11:45:26.995592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.741 [2024-07-15 11:45:26.995630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:26.995846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:26.995878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:26.996153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:26.996185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:26.996503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:26.996536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:26.996813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:26.996845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:26.997119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:26.997151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:26.997462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:26.997496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:26.997765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:26.997797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:26.998039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:26.998071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:26.998398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:26.998431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 
00:29:52.742 [2024-07-15 11:45:26.998740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:26.998773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:26.999052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:26.999085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:26.999301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:26.999335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:26.999608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:26.999640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:26.999948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:26.999981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.000270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.000303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.000624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.000657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.000954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.000987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.001278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.001313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.001515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.001547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 
00:29:52.742 [2024-07-15 11:45:27.001677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.001709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.001926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.001957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.002168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.002200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.002365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.002398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.002683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.002715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.002932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.002964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.003104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.003136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.003451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.003486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.003790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.003823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.004157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.004189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 
00:29:52.742 [2024-07-15 11:45:27.004490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.004524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.004851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.004883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.005130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.005163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.005487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.005521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.005799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.005831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.006137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.006169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.006454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.006488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.006764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.006796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.007103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.007135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.007430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.007464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 
00:29:52.742 [2024-07-15 11:45:27.007768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.007805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.008111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.008144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.008293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.008327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.008536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.008568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.008883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.008915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.009238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.009282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.009557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.009590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.009898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.009931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.010157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.010188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.010480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.010515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 
00:29:52.742 [2024-07-15 11:45:27.010814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.010845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.011132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.011164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.011384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.742 [2024-07-15 11:45:27.011418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.742 qpair failed and we were unable to recover it. 00:29:52.742 [2024-07-15 11:45:27.011688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.011719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.011962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.011995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.012205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.012237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.012547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.012580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.012796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.012828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.013102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.013134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.013348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.013381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 
00:29:52.743 [2024-07-15 11:45:27.013587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.013619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.013946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.013978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.014279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.014313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.014598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.014631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.014873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.014904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.015103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.015135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.015443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.015477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.015707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.015739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.015963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.015995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.016214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.016247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 
00:29:52.743 [2024-07-15 11:45:27.016564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.016596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.016798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.016830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.017132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.017163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.017452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.017485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.017641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.017673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.017874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.017906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.018177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.018210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.018549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.018583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.018854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.018887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.019104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.019136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 
00:29:52.743 [2024-07-15 11:45:27.019411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.019450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.019776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.019807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.020024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.020056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.020327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.020360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.020604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.020635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.020840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.020872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.021177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.021211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.021520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.021553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.021879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.021912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.022110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.022142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 
00:29:52.743 [2024-07-15 11:45:27.022377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.022410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.022717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.022750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.023074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.023105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.023414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.023447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.023657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.023690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.023891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.023923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.024138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.024171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.024414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.024448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.024660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.024692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.024991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.025023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 
00:29:52.743 [2024-07-15 11:45:27.025265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.025298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.025575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.025607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.025879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.025911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.026184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.026216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.026456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.026490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.026761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.026793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.027070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.027102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.743 [2024-07-15 11:45:27.027277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.743 [2024-07-15 11:45:27.027311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.743 qpair failed and we were unable to recover it. 00:29:52.744 [2024-07-15 11:45:27.027583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.744 [2024-07-15 11:45:27.027615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.744 qpair failed and we were unable to recover it. 00:29:52.744 [2024-07-15 11:45:27.027916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.744 [2024-07-15 11:45:27.027948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.744 qpair failed and we were unable to recover it. 
00:29:52.744 [2024-07-15 11:45:27.028176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.744 [2024-07-15 11:45:27.028209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.744 qpair failed and we were unable to recover it. 00:29:52.744 [2024-07-15 11:45:27.028552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.744 [2024-07-15 11:45:27.028586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.744 qpair failed and we were unable to recover it. 00:29:52.744 [2024-07-15 11:45:27.028884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.744 [2024-07-15 11:45:27.028916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.744 qpair failed and we were unable to recover it. 00:29:52.744 [2024-07-15 11:45:27.029203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.744 [2024-07-15 11:45:27.029234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.744 qpair failed and we were unable to recover it. 00:29:52.744 [2024-07-15 11:45:27.029542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.744 [2024-07-15 11:45:27.029575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.744 qpair failed and we were unable to recover it. 00:29:52.744 [2024-07-15 11:45:27.029856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.744 [2024-07-15 11:45:27.029888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.744 qpair failed and we were unable to recover it. 00:29:52.744 [2024-07-15 11:45:27.030192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.744 [2024-07-15 11:45:27.030224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.744 qpair failed and we were unable to recover it. 00:29:52.744 [2024-07-15 11:45:27.030535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.744 [2024-07-15 11:45:27.030568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.744 qpair failed and we were unable to recover it. 00:29:52.744 [2024-07-15 11:45:27.030808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.744 [2024-07-15 11:45:27.030839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.744 qpair failed and we were unable to recover it. 00:29:52.744 [2024-07-15 11:45:27.031111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.744 [2024-07-15 11:45:27.031142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.744 qpair failed and we were unable to recover it. 
00:29:52.744 [2024-07-15 11:45:27.031448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.744 [2024-07-15 11:45:27.031522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.744 qpair failed and we were unable to recover it. 00:29:52.744 [2024-07-15 11:45:27.031824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.744 [2024-07-15 11:45:27.031856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.744 qpair failed and we were unable to recover it. 00:29:52.744 [2024-07-15 11:45:27.032104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.744 [2024-07-15 11:45:27.032136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.744 qpair failed and we were unable to recover it. 00:29:52.744 [2024-07-15 11:45:27.032463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.744 [2024-07-15 11:45:27.032496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.744 qpair failed and we were unable to recover it. 00:29:52.744 [2024-07-15 11:45:27.032779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.744 [2024-07-15 11:45:27.032811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.744 qpair failed and we were unable to recover it. 00:29:52.744 [2024-07-15 11:45:27.033023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.744 [2024-07-15 11:45:27.033055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.744 qpair failed and we were unable to recover it. 00:29:52.744 [2024-07-15 11:45:27.033274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.744 [2024-07-15 11:45:27.033308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.744 qpair failed and we were unable to recover it. 00:29:52.744 [2024-07-15 11:45:27.033519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.744 [2024-07-15 11:45:27.033551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.744 qpair failed and we were unable to recover it. 00:29:52.744 [2024-07-15 11:45:27.033750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.744 [2024-07-15 11:45:27.033782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.744 qpair failed and we were unable to recover it. 00:29:52.744 [2024-07-15 11:45:27.034081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.744 [2024-07-15 11:45:27.034113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.744 qpair failed and we were unable to recover it. 
00:29:52.744 [2024-07-15 11:45:27.034428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.744 [2024-07-15 11:45:27.034462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420
00:29:52.744 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt logged between 11:45:27.034 and 11:45:27.099 ...]
00:29:52.748 [2024-07-15 11:45:27.099084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.748 [2024-07-15 11:45:27.099116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420
00:29:52.748 qpair failed and we were unable to recover it.
00:29:52.748 [2024-07-15 11:45:27.099335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.748 [2024-07-15 11:45:27.099369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.748 qpair failed and we were unable to recover it. 00:29:52.748 [2024-07-15 11:45:27.099566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.748 [2024-07-15 11:45:27.099600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.748 qpair failed and we were unable to recover it. 00:29:52.748 [2024-07-15 11:45:27.099829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.748 [2024-07-15 11:45:27.099860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.748 qpair failed and we were unable to recover it. 00:29:52.748 [2024-07-15 11:45:27.100013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.748 [2024-07-15 11:45:27.100045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.748 qpair failed and we were unable to recover it. 00:29:52.748 [2024-07-15 11:45:27.100279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.748 [2024-07-15 11:45:27.100314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.748 qpair failed and we were unable to recover it. 00:29:52.748 [2024-07-15 11:45:27.100548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.748 [2024-07-15 11:45:27.100582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.748 qpair failed and we were unable to recover it. 00:29:52.748 [2024-07-15 11:45:27.100779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.748 [2024-07-15 11:45:27.100813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.748 qpair failed and we were unable to recover it. 00:29:52.748 [2024-07-15 11:45:27.100972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.748 [2024-07-15 11:45:27.101009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.748 qpair failed and we were unable to recover it. 00:29:52.748 [2024-07-15 11:45:27.101353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.748 [2024-07-15 11:45:27.101390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.748 qpair failed and we were unable to recover it. 00:29:52.748 [2024-07-15 11:45:27.103069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.748 [2024-07-15 11:45:27.103127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.748 qpair failed and we were unable to recover it. 
00:29:52.748 [2024-07-15 11:45:27.103405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.748 [2024-07-15 11:45:27.103442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.748 qpair failed and we were unable to recover it. 00:29:52.748 [2024-07-15 11:45:27.103722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.748 [2024-07-15 11:45:27.103756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.748 qpair failed and we were unable to recover it. 00:29:52.748 [2024-07-15 11:45:27.103899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.748 [2024-07-15 11:45:27.103933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.748 qpair failed and we were unable to recover it. 00:29:52.748 [2024-07-15 11:45:27.104217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.748 [2024-07-15 11:45:27.104249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.748 qpair failed and we were unable to recover it. 00:29:52.748 [2024-07-15 11:45:27.104437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.748 [2024-07-15 11:45:27.104470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.748 qpair failed and we were unable to recover it. 00:29:52.748 [2024-07-15 11:45:27.104748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.748 [2024-07-15 11:45:27.104780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.748 qpair failed and we were unable to recover it. 00:29:52.748 [2024-07-15 11:45:27.105095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.748 [2024-07-15 11:45:27.105128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.748 qpair failed and we were unable to recover it. 00:29:52.748 [2024-07-15 11:45:27.105363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.748 [2024-07-15 11:45:27.105397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.748 qpair failed and we were unable to recover it. 00:29:52.748 [2024-07-15 11:45:27.107089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.748 [2024-07-15 11:45:27.107146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.748 qpair failed and we were unable to recover it. 00:29:52.748 [2024-07-15 11:45:27.107455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.748 [2024-07-15 11:45:27.107490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.748 qpair failed and we were unable to recover it. 
00:29:52.748 [2024-07-15 11:45:27.107710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.748 [2024-07-15 11:45:27.107743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.748 qpair failed and we were unable to recover it. 00:29:52.748 [2024-07-15 11:45:27.107897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.748 [2024-07-15 11:45:27.107931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.748 qpair failed and we were unable to recover it. 00:29:52.748 [2024-07-15 11:45:27.108219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.748 [2024-07-15 11:45:27.108251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.748 qpair failed and we were unable to recover it. 00:29:52.748 [2024-07-15 11:45:27.108431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.748 [2024-07-15 11:45:27.108463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.748 qpair failed and we were unable to recover it. 00:29:52.748 [2024-07-15 11:45:27.108774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.748 [2024-07-15 11:45:27.108806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.748 qpair failed and we were unable to recover it. 00:29:52.748 [2024-07-15 11:45:27.108969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.109002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.109233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.109292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.109460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.109492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.109711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.109743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.109900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.109933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 
00:29:52.749 [2024-07-15 11:45:27.110150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.110184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.110508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.110543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.110691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.110725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.111000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.111032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.111314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.111347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.111620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.111653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.111986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.112018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.112299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.112332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.112490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.112523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.112722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.112753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 
00:29:52.749 [2024-07-15 11:45:27.113030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.113063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.113278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.113311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.113584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.113616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.113882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.113914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.114074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.114107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.114404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.114438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.114738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.114770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.115086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.115123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.115436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.115469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.115655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.115687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 
00:29:52.749 [2024-07-15 11:45:27.115906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.115939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.116202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.116235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.116524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.116557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.116858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.116890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.117175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.117206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.117512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.117548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.117763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.117794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.118114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.118146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.118370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.118405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.118566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.118598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 
00:29:52.749 [2024-07-15 11:45:27.118870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.118903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.119164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.119196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.119440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.119473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.119637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.119669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.120022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.120054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.120300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.120334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.120580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.120612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.120885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.120917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.121069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.121101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.121432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.121466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 
00:29:52.749 [2024-07-15 11:45:27.121676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.121708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.121851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.121884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.122178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.122210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.122496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.122530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.122667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.122699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.122924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.122955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.123272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.123305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.123558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.123590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.123763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.123794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.749 [2024-07-15 11:45:27.124003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.124035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 
00:29:52.749 [2024-07-15 11:45:27.124284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.749 [2024-07-15 11:45:27.124319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.749 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.124539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.124571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.124869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.124902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.125217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.125249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.125572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.125606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.125878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.125910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.126272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.126305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.126606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.126645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.126941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.126973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.127266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.127299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 
00:29:52.750 [2024-07-15 11:45:27.127592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.127624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.127895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.127927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.128253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.128300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.128562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.128594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.128742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.128774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.128919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.128951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.129227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.129289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.129590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.129624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.129868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.129900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.130176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.130208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 
00:29:52.750 [2024-07-15 11:45:27.130502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.130536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.130865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.130898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.131171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.131204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.131521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.131554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.131761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.131793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.132121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.132154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.132459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.132494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.132739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.132770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.133012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.133044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.133352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.133385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 
00:29:52.750 [2024-07-15 11:45:27.133606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.133639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.133802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.133835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.134071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.134102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.134272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.134306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.134560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.134593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.134894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.134926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.135236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.135279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.135534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.135566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.135696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.135729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.136025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.136056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 
00:29:52.750 [2024-07-15 11:45:27.136383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.136418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.136657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.136689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.136985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.137017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.137310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.137344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.137603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.137635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.137959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.137991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.138291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.138327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.138613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.138650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.139018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.139050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.139274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.139307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 
00:29:52.750 [2024-07-15 11:45:27.139532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.139565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.139711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.139743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.139966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.139998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.140221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.140253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.140543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.140576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.750 [2024-07-15 11:45:27.142790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.750 [2024-07-15 11:45:27.142852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.750 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.143236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.143286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.143513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.143546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.145113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.145168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.145463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.145497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 
00:29:52.751 [2024-07-15 11:45:27.145777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.145810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.145978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.146010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.146312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.146345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.146493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.146525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.146684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.146716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.146861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.146893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.147198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.147230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.147471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.147504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.147658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.147691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.147907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.147939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 
00:29:52.751 [2024-07-15 11:45:27.148180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.148213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.148372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.148406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.148633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.148665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.148912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.148944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.149162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.149195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.149430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.149466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.149770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.149802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.149942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.149974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.150188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.150220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.150457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.150491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 
00:29:52.751 [2024-07-15 11:45:27.150628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.150661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.150792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.150824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.151026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.151058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.151330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.151364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.151569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.151602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.151822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.151855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.152083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.152115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.152332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.152369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.152594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.152628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.152859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.152891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 
00:29:52.751 [2024-07-15 11:45:27.153093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.153125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.153339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.153372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.153521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.153553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.153779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.153811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.154138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.154170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.154467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.154501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.154715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.154747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.154901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.154933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.155138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.155169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.155339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.155374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 
00:29:52.751 [2024-07-15 11:45:27.155509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.155542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.155745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.155778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.155995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.751 [2024-07-15 11:45:27.156027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.751 qpair failed and we were unable to recover it. 00:29:52.751 [2024-07-15 11:45:27.156324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.156357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.156630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.156661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.156897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.156929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.157069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.157101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.157224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.157324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.157539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.157572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.157793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.157825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 
00:29:52.752 [2024-07-15 11:45:27.158032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.158064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.158209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.158241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.158550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.158583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.158784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.158815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.158963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.158996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.159152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.159184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.159405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.159439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.159651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.159683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.159888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.159919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.160158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.160189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 
00:29:52.752 [2024-07-15 11:45:27.160432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.160466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.160737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.160769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.160916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.160948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.161156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.161189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.161318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.161350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.161501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.161534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.161677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.161709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.161939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.161975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.162174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.162207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.162357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.162390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 
00:29:52.752 [2024-07-15 11:45:27.162612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.162644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.162895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.162927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.163131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.163163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.163326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.163360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.163602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.163634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.163909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.163941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.164139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.164171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.164367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.164400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.164545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.164576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.164861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.164894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 
00:29:52.752 [2024-07-15 11:45:27.165039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.165071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.165226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.165298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.165503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.165536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.165757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.165789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.165944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.165976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.166188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.166220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.166478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.166512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.166759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.166791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.167040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.167071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.167229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.167275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 
00:29:52.752 [2024-07-15 11:45:27.167420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.167453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.167575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.167605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.167823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.167855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.168131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.168162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.168378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.168412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.168616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.168648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.168858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.752 [2024-07-15 11:45:27.168889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.752 qpair failed and we were unable to recover it. 00:29:52.752 [2024-07-15 11:45:27.169088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.169120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.169284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.169316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.169521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.169553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 
00:29:52.753 [2024-07-15 11:45:27.169775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.169807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.170031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.170063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.170207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.170239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.170384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.170416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.170569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.170601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.170737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.170769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.170980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.171013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.171213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.171250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.171481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.171513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.171745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.171777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 
00:29:52.753 [2024-07-15 11:45:27.171917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.171949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.172280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.172313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.172613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.172645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.172894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.172926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.173162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.173194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.173407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.173442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.173656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.173688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.173901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.173935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.174089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.174122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.174329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.174363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 
00:29:52.753 [2024-07-15 11:45:27.174510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.174542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.174841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.174874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.175151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.175183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.175324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.175357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.175625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.175657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.175876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.175908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.177187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.177241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.177521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.177556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.177826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.177859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.178019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.178053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 
00:29:52.753 [2024-07-15 11:45:27.178278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.178313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.178535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.178569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.178719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.178751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.178894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.178926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.179224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.179271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.179417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.179449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.179654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.179685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.180005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.180037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.180177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.180210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.180365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.180397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 
00:29:52.753 [2024-07-15 11:45:27.180642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.180674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.180837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.180870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.181137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.181168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.181329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.181363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.181657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.181688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.181960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.181992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.182123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.182157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.182387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.182427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.182633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.182665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.182944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.182976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 
00:29:52.753 [2024-07-15 11:45:27.183193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.183225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.183435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.183468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.753 [2024-07-15 11:45:27.183679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.753 [2024-07-15 11:45:27.183711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.753 qpair failed and we were unable to recover it. 00:29:52.754 [2024-07-15 11:45:27.183930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.754 [2024-07-15 11:45:27.183961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.754 qpair failed and we were unable to recover it. 00:29:52.754 [2024-07-15 11:45:27.184138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.754 [2024-07-15 11:45:27.184170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.754 qpair failed and we were unable to recover it. 00:29:52.754 [2024-07-15 11:45:27.184327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.754 [2024-07-15 11:45:27.184360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:52.754 qpair failed and we were unable to recover it. 00:29:53.029 [2024-07-15 11:45:27.184618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.029 [2024-07-15 11:45:27.184651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:53.029 qpair failed and we were unable to recover it. 00:29:53.029 [2024-07-15 11:45:27.184791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.029 [2024-07-15 11:45:27.184824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:53.029 qpair failed and we were unable to recover it. 00:29:53.029 [2024-07-15 11:45:27.185064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.029 [2024-07-15 11:45:27.185096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:53.029 qpair failed and we were unable to recover it. 00:29:53.029 [2024-07-15 11:45:27.185306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.029 [2024-07-15 11:45:27.185342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:53.029 qpair failed and we were unable to recover it. 
00:29:53.029 [2024-07-15 11:45:27.185493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.029 [2024-07-15 11:45:27.185526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420
00:29:53.029 qpair failed and we were unable to recover it.
00:29:53.030 [2024-07-15 11:45:27.194192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.030 [2024-07-15 11:45:27.194225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420
00:29:53.030 qpair failed and we were unable to recover it.
00:29:53.030 [2024-07-15 11:45:27.194482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.030 [2024-07-15 11:45:27.194553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7390000b90 with addr=10.0.0.2, port=4420
00:29:53.030 qpair failed and we were unable to recover it.
00:29:53.035 [2024-07-15 11:45:27.225805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.035 [2024-07-15 11:45:27.225836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7390000b90 with addr=10.0.0.2, port=4420
00:29:53.035 qpair failed and we were unable to recover it.
00:29:53.035 [2024-07-15 11:45:27.226946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c7e60 is same with the state(5) to be set
00:29:53.035 [2024-07-15 11:45:27.227187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.035 [2024-07-15 11:45:27.227234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:53.035 qpair failed and we were unable to recover it.
00:29:53.036 [2024-07-15 11:45:27.233613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.036 [2024-07-15 11:45:27.233633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:53.036 qpair failed and we were unable to recover it.
00:29:53.036 [2024-07-15 11:45:27.233792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.036 [2024-07-15 11:45:27.233810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.036 qpair failed and we were unable to recover it. 00:29:53.036 [2024-07-15 11:45:27.234061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.036 [2024-07-15 11:45:27.234082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.036 qpair failed and we were unable to recover it. 00:29:53.036 [2024-07-15 11:45:27.234205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.036 [2024-07-15 11:45:27.234224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.036 qpair failed and we were unable to recover it. 00:29:53.036 [2024-07-15 11:45:27.234353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.036 [2024-07-15 11:45:27.234373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.036 qpair failed and we were unable to recover it. 00:29:53.036 [2024-07-15 11:45:27.234487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.036 [2024-07-15 11:45:27.234507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.036 qpair failed and we were unable to recover it. 00:29:53.036 [2024-07-15 11:45:27.234648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.036 [2024-07-15 11:45:27.234668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.036 qpair failed and we were unable to recover it. 00:29:53.036 [2024-07-15 11:45:27.234781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.036 [2024-07-15 11:45:27.234801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.036 qpair failed and we were unable to recover it. 00:29:53.036 [2024-07-15 11:45:27.234915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.036 [2024-07-15 11:45:27.234935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.036 qpair failed and we were unable to recover it. 00:29:53.036 [2024-07-15 11:45:27.235057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.036 [2024-07-15 11:45:27.235076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.036 qpair failed and we were unable to recover it. 00:29:53.036 [2024-07-15 11:45:27.235343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.036 [2024-07-15 11:45:27.235364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.036 qpair failed and we were unable to recover it. 
00:29:53.036 [2024-07-15 11:45:27.235496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.036 [2024-07-15 11:45:27.235515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.036 qpair failed and we were unable to recover it. 00:29:53.036 [2024-07-15 11:45:27.235709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.036 [2024-07-15 11:45:27.235728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.036 qpair failed and we were unable to recover it. 00:29:53.036 [2024-07-15 11:45:27.235840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.036 [2024-07-15 11:45:27.235859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.036 qpair failed and we were unable to recover it. 00:29:53.036 [2024-07-15 11:45:27.236095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.036 [2024-07-15 11:45:27.236115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.036 qpair failed and we were unable to recover it. 00:29:53.036 [2024-07-15 11:45:27.236321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.036 [2024-07-15 11:45:27.236341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.036 qpair failed and we were unable to recover it. 00:29:53.036 [2024-07-15 11:45:27.236449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.036 [2024-07-15 11:45:27.236468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.036 qpair failed and we were unable to recover it. 00:29:53.036 [2024-07-15 11:45:27.236635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.036 [2024-07-15 11:45:27.236660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.036 qpair failed and we were unable to recover it. 00:29:53.036 [2024-07-15 11:45:27.236836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.036 [2024-07-15 11:45:27.236855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.036 qpair failed and we were unable to recover it. 00:29:53.036 [2024-07-15 11:45:27.236954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.036 [2024-07-15 11:45:27.236973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.036 qpair failed and we were unable to recover it. 00:29:53.036 [2024-07-15 11:45:27.237166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.036 [2024-07-15 11:45:27.237186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.036 qpair failed and we were unable to recover it. 
00:29:53.036 [2024-07-15 11:45:27.237353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.036 [2024-07-15 11:45:27.237373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.036 qpair failed and we were unable to recover it. 00:29:53.036 [2024-07-15 11:45:27.237552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.036 [2024-07-15 11:45:27.237571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.036 qpair failed and we were unable to recover it. 00:29:53.036 [2024-07-15 11:45:27.237819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.036 [2024-07-15 11:45:27.237838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.036 qpair failed and we were unable to recover it. 00:29:53.036 [2024-07-15 11:45:27.238023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.036 [2024-07-15 11:45:27.238042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.036 qpair failed and we were unable to recover it. 00:29:53.036 [2024-07-15 11:45:27.238249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.036 [2024-07-15 11:45:27.238277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.036 qpair failed and we were unable to recover it. 00:29:53.036 [2024-07-15 11:45:27.238458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.036 [2024-07-15 11:45:27.238478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.036 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.238653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.238672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.238912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.238932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.239094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.239113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.239286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.239306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 
00:29:53.037 [2024-07-15 11:45:27.239482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.239502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.239615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.239638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.239833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.239852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.240104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.240123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.240323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.240343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.240439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.240457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.240555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.240574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.240813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.240832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.240935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.240955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.241121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.241140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 
00:29:53.037 [2024-07-15 11:45:27.241226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.241244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.241438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.241458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.241626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.241645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.241812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.241831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.241923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.241942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.242067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.242086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.242200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.242219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.242392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.242412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.242623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.242642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.242735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.242754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 
00:29:53.037 [2024-07-15 11:45:27.242926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.242946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.243129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.243149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.243268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.243288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.243394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.243413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.243670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.243689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.243816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.243835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.244013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.244032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.244218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.244238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.244546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.244617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7390000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.244870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.244904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7390000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 
00:29:53.037 [2024-07-15 11:45:27.245095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.245127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7390000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.245317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.245339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.245513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.245533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.245642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.245661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.245847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.245866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.246125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.246145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.246351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.246371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.246532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.246552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.246655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.246675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.246766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.246784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 
00:29:53.037 [2024-07-15 11:45:27.246994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.247014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.247131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.037 [2024-07-15 11:45:27.247152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.037 qpair failed and we were unable to recover it. 00:29:53.037 [2024-07-15 11:45:27.247388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.247408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.247686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.247706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.247870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.247889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.248052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.248071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.248253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.248277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.248438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.248458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.248659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.248679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.248844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.248863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 
00:29:53.038 [2024-07-15 11:45:27.249039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.249058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.249156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.249176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.249412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.249433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.249666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.249685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.249934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.249954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.250144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.250163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.250337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.250357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.250464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.250485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.250715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.250735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.250968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.250988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 
00:29:53.038 [2024-07-15 11:45:27.251080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.251099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.251302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.251323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.251527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.251546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.251804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.251824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.251951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.251970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.252145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.252164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.252371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.252391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.252599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.252619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.252879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.252898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.253003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.253021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 
00:29:53.038 [2024-07-15 11:45:27.253226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.253245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.253441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.253461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.253582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.253601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.253771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.253791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.253977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.253997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.254177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.254197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.254314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.254334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.254444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.254464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.254638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.254657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.254839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.254859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 
00:29:53.038 [2024-07-15 11:45:27.255039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.255059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.255222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.255245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.255470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.255489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.255603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.255622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.255881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.255902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.256101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.256120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.256384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.256404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.256520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.256538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.038 [2024-07-15 11:45:27.256666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.038 [2024-07-15 11:45:27.256685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.038 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.256873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.256892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 
00:29:53.039 [2024-07-15 11:45:27.257155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.257174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.257365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.257384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.257492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.257512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.257750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.257769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.258007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.258026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.258228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.258248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.258462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.258482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.258646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.258666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.258910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.258930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.259172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.259191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 
00:29:53.039 [2024-07-15 11:45:27.259376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.259397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.259572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.259591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.259778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.259797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.259962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.259982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.260146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.260165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.260274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.260294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.260565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.260585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.260713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.260732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.260983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.261002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.261123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.261142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 
00:29:53.039 [2024-07-15 11:45:27.261250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.261274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.261467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.261487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.261650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.261669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.261946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.261966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.262238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.262268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.262507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.262527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.262786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.262805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.262901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.262920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.263187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.263206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.263468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.263489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 
00:29:53.039 [2024-07-15 11:45:27.263619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.263638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.263757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.263779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.264042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.264062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.264229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.264249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.264512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.264532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.264695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.264714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.264910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.264929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.265060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.265079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.039 [2024-07-15 11:45:27.265262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.039 [2024-07-15 11:45:27.265282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.039 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.265567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.265587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 
00:29:53.040 [2024-07-15 11:45:27.265710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.265729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.265960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.265980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.266168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.266187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.266456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.266476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.266707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.266726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.266892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.266911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.267117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.267136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.267311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.267331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.267585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.267604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.267696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.267715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 
00:29:53.040 [2024-07-15 11:45:27.267822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.267841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.267951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.267969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.268073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.268092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.268270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.268290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.268561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.268582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.268700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.268719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.268949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.268969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.269055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.269073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.269176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.269196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.269402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.269422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 
00:29:53.040 [2024-07-15 11:45:27.269541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.269560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.269814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.269833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.270008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.270027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.270224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.270243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.270377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.270397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.270654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.270673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.270766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.270786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.270965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.270985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.271188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.271208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.271380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.271400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 
00:29:53.040 [2024-07-15 11:45:27.271530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.271549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.271710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.271733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.271913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.271932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.272106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.272125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.272291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.272311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.272400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.272419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.272538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.272558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.040 qpair failed and we were unable to recover it. 00:29:53.040 [2024-07-15 11:45:27.272718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.040 [2024-07-15 11:45:27.272738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.272840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.272859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.273039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.273059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 
00:29:53.041 [2024-07-15 11:45:27.273267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.273287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.273399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.273419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.273600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.273620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.273722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.273742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.273933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.273953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.274131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.274150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.274243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.274276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.274435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.274455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.274712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.274732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.274839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.274858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 
00:29:53.041 [2024-07-15 11:45:27.275052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.275072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.275231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.275251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.275497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.275517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.275612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.275631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.275888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.275907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.276010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.276029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.276298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.276318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.276404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.276422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.276518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.276538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.276799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.276818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 
00:29:53.041 [2024-07-15 11:45:27.277082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.277101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.277285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.277305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.277537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.277557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.277731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.277750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.277918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.277938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.278052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.278071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.278274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.278294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.278524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.278544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.278801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.278820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.279102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.279122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 
00:29:53.041 [2024-07-15 11:45:27.279393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.279413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.279588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.279607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.041 [2024-07-15 11:45:27.279787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.041 [2024-07-15 11:45:27.279807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.041 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.280056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.280075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.280183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.280203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.280310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.280331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.280522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.280542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.280776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.280795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.280904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.280923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.281042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.281061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 
00:29:53.042 [2024-07-15 11:45:27.281234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.281258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.281453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.281472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.281653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.281673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.281769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.281788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.281950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.281969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.282080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.282099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.282281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.282301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.282468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.282488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.282603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.282621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.282782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.282801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 
00:29:53.042 [2024-07-15 11:45:27.282984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.283003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.283091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.283109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.283222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.283242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.283423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.283443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.283728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.283748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.283837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.283855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.284014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.284033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.284222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.284242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.284338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.284362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.284626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.284645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 
00:29:53.042 [2024-07-15 11:45:27.284747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.284766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.284946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.284965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.285069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.285088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.285352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.285372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.285481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.285500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.285673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.285692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.285927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.285945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.286206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.286225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.286464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.042 [2024-07-15 11:45:27.286485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.042 qpair failed and we were unable to recover it. 00:29:53.042 [2024-07-15 11:45:27.286650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.286669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 
00:29:53.043 [2024-07-15 11:45:27.286836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.286855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.287074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.287094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.287383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.287403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.287646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.287665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.287827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.287846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.288008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.288027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.288146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.288165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.288284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.288304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.288427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.288447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.288551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.288570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 
00:29:53.043 [2024-07-15 11:45:27.288742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.288762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.288938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.288958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.289135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.289154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.289344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.289365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.289633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.289653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.289825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.289844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.290027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.290047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.290229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.290248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.290457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.290476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.290635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.290654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 
00:29:53.043 [2024-07-15 11:45:27.290826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.290845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.291105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.291124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.291326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.291347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.291474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.291493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.291676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.291695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.291804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.291824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.292011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.292030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.292128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.292148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.292382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.292405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 00:29:53.043 [2024-07-15 11:45:27.292579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.043 [2024-07-15 11:45:27.292598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.043 qpair failed and we were unable to recover it. 
00:29:53.043 [2024-07-15 11:45:27.292710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.043 [2024-07-15 11:45:27.292729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:53.043 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously (elapsed 00:29:53.044 through 00:29:53.048, SPDK timestamps 2024-07-15 11:45:27.292853 through 11:45:27.333706): posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. ...]
00:29:53.048 [2024-07-15 11:45:27.333868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.048 [2024-07-15 11:45:27.333888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:53.048 qpair failed and we were unable to recover it.
00:29:53.048 [2024-07-15 11:45:27.334039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.334058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.334247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.334272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.334468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.334488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.334648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.334667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.334829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.334848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.334972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.334991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.335262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.335283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.335443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.335462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.335568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.335588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.335676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.335694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 
00:29:53.048 [2024-07-15 11:45:27.335877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.335896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.336072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.336091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.336200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.336220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.336406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.336426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.336610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.336628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.336732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.336751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.336921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.336940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.337105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.337123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.337424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.337445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.337583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.337603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 
00:29:53.048 [2024-07-15 11:45:27.337817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.337836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.338147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.338166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.338425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.338445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.338642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.338661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.338767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.338786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.338958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.338977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.339102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.339122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.339309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.339329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.339433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.339456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.339632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.339652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 
00:29:53.048 [2024-07-15 11:45:27.339757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.339777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.339883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.339902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.340064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.340083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.340192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.340211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.340377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.340398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.340590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.340610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.340795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.340814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.341003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.341022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.341144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.341164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.341326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.341346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 
00:29:53.048 [2024-07-15 11:45:27.341468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.341487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.341589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.341607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.341779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.341799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.341960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.341979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.342235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.342259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.342428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.342448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.342689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.342708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.342800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.342818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.048 [2024-07-15 11:45:27.343049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.048 [2024-07-15 11:45:27.343069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.048 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.343270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.343290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 
00:29:53.049 [2024-07-15 11:45:27.343411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.343430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.343591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.343610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.343707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.343726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.343904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.343923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.344102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.344121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.344243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.344270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.344435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.344454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.344553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.344573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.344779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.344799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.344900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.344920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 
00:29:53.049 [2024-07-15 11:45:27.345030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.345047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.345206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.345226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.345343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.345363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.345460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.345479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.345644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.345663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.345851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.345870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.346099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.346119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.346280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.346300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.346505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.346527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.346769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.346788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 
00:29:53.049 [2024-07-15 11:45:27.346898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.346918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.347129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.347148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.347383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.347403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.347499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.347516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.347696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.347715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.347873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.347893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.348013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.348032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.348209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.348228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.348409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.348428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.348626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.348645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 
00:29:53.049 [2024-07-15 11:45:27.348832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.348851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.349013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.349032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.349145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.349164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.349426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.349446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.349678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.349697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.349800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.349819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.349943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.349963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.350122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.350141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.350232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.350251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.350438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.350457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 
00:29:53.049 [2024-07-15 11:45:27.350633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.350652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.350824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.350843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.351027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.351046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.351225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.351244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.351359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.351378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.351557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.351576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.351665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.351684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.351863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.351882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.352047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.352066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.352229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.352248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 
00:29:53.049 [2024-07-15 11:45:27.352515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.352534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.352644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.352664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.352927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.352947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.353151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.353169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.353410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.353431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.353546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.049 [2024-07-15 11:45:27.353565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.049 qpair failed and we were unable to recover it. 00:29:53.049 [2024-07-15 11:45:27.353672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.353692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.353897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.353916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.354087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.354109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.354222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.354241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 
00:29:53.050 [2024-07-15 11:45:27.354453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.354473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.354570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.354589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.354706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.354725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.354906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.354925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.355118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.355137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.355303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.355324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.355428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.355448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.355558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.355578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.355856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.355875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.356154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.356173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 
00:29:53.050 [2024-07-15 11:45:27.356361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.356381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.356488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.356508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.356693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.356713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.356825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.356844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.356954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.356973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.357136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.357155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.357333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.357353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.357470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.357489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.357779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.357798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.357997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.358017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 
00:29:53.050 [2024-07-15 11:45:27.358108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.358127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.358310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.358330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.358431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.358451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.358614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.358633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.358818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.358838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.358951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.358970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.359135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.359154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.359333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.359353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.359460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.359479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.359710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.359729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 
00:29:53.050 [2024-07-15 11:45:27.359914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.359933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.360161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.360180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.360442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.360462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.360698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.360718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.360972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.360991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.361164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.361183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.361347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.361367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.361499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.361518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.361671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.361693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.361831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.361850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 
00:29:53.050 [2024-07-15 11:45:27.362011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.362030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.362188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.362207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.362370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.362390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.362566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.362584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.362749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.362768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.362946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.362965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.363096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.363115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.363230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.363249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.363476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.363507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 00:29:53.050 [2024-07-15 11:45:27.363720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.050 [2024-07-15 11:45:27.363751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.050 qpair failed and we were unable to recover it. 
00:29:53.051 [2024-07-15 11:45:27.363869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.363899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.364099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.364118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.364249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.364274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.364373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.364393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.364558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.364578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.364760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.364779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.365035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.365053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.365285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.365305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.365442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.365461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.365585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.365604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 
00:29:53.051 [2024-07-15 11:45:27.365761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.365780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.365945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.365964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.366058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.366077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.366250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.366276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.366384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.366403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.366603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.366622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.366808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.366827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.367006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.367025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.367141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.367160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.367334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.367354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 
00:29:53.051 [2024-07-15 11:45:27.367629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.367649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.367880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.367899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.368065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.368084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.368248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.368273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.368401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.368420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.368587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.368606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.368785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.368805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.368919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.368939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.369127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.369149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.369352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.369372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 
00:29:53.051 [2024-07-15 11:45:27.369601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.369620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.369716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.369734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.369841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.369860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.370022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.370041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.370228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.370247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.370435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.370454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.370571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.370590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.370784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.370804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.371097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.371116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.371349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.371369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 
00:29:53.051 [2024-07-15 11:45:27.371483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.371501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.371730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.371749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.371859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.371879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.372112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.372131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.372236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.372259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.372363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.372382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.372557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.372576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.372878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.372897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.373073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.373092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.373357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.373377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 
00:29:53.051 [2024-07-15 11:45:27.373562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.373582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.373712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.373732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.373919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.373940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.374106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.374127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.374301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.374321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.374436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.374456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.374658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.374679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.374854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.374875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.375060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.375082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 00:29:53.051 [2024-07-15 11:45:27.375317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.051 [2024-07-15 11:45:27.375338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.051 qpair failed and we were unable to recover it. 
00:29:53.051 [2024-07-15 11:45:27.375503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.375523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.375610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.375629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.375792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.375813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.375926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.375946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.376133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.376153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.376345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.376366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.376549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.376569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.376735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.376754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.376940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.376961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.377175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.377196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 
00:29:53.052 [2024-07-15 11:45:27.377497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.377518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.377696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.377717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.377978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.377999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.378111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.378132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.378303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.378325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.378480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.378500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.378688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.378709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.378845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.378866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.379151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.379172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.379420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.379441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 
00:29:53.052 [2024-07-15 11:45:27.379568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.379589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.379847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.379867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.380053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.380074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.380275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.380296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.380408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.380427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.380541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.380561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.380700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.380720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.380822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.380843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.380951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.380971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.381089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.381108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 
00:29:53.052 [2024-07-15 11:45:27.381220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.381241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.381350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.381371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.381485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.381505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.381683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.381704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.381875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.381895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.382008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.382030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.382126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.382147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.382326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.382347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.382530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.382550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.382726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.382748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 
00:29:53.052 [2024-07-15 11:45:27.382860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.382880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.383146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.383166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.383277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.383297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.383468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.383487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.383718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.383737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.383855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.383874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.384041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.384060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.384166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.384185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.384494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.384513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.384686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.384704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 
00:29:53.052 [2024-07-15 11:45:27.384868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.384886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.385003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.385023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.385193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.385211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.385405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.385425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.385597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.385616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.385780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.385799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.385964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.385984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.386146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.386165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.386395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.386415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.386648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.386667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 
00:29:53.052 [2024-07-15 11:45:27.386833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.052 [2024-07-15 11:45:27.386852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.052 qpair failed and we were unable to recover it. 00:29:53.052 [2024-07-15 11:45:27.386952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.386971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.387223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.387243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.387417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.387437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.387544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.387564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.387745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.387765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.388014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.388033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.388235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.388265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.388361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.388381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.388562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.388581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 
00:29:53.053 [2024-07-15 11:45:27.388683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.388702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.388939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.388959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.389137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.389156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.389284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.389306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.389480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.389500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.389589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.389613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.389848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.389867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.390048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.390067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.390232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.390251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.390433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.390452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 
00:29:53.053 [2024-07-15 11:45:27.390552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.390570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.390729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.390748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.391008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.391027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.391264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.391283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.391457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.391476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.391655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.391674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.391843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.391862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.392078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.392097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.392233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.392252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.392436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.392455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 
00:29:53.053 [2024-07-15 11:45:27.392709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.392728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.393010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.393030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.393205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.393224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.393394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.393413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.393518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.393536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.393667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.393687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.393855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.393874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.394055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.394073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.394251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.394277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.394450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.394469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 
00:29:53.053 [2024-07-15 11:45:27.394583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.394601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.394763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.394782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.395023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.395042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.395230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.395250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.395382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.395402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.395591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.395610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.395700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.395718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.395823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.395843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.396022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.396042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 00:29:53.053 [2024-07-15 11:45:27.396132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.053 [2024-07-15 11:45:27.396150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.053 qpair failed and we were unable to recover it. 
00:29:53.053 [2024-07-15 11:45:27.396242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.396267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.396430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.396450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.396561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.396580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.396786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.396805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.396931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.396951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.397044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.397066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.397233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.397252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.397461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.397481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.397648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.397667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.397780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.397798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 
00:29:53.054 [2024-07-15 11:45:27.397993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.398012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.398247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.398273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.398392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.398411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.398585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.398605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.398703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.398722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.398898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.398917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.399088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.399107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.399270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.399290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.399415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.399434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.399638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.399657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 
00:29:53.054 [2024-07-15 11:45:27.399762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.399780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.399906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.399925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.400046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.400065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.400174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.400193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.400307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.400340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.400521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.400540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.400666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.400685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.400852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.400871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.401064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.401084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.401269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.401289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 
00:29:53.054 [2024-07-15 11:45:27.401454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.401474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.401657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.401676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.401935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.401954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.402050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.402070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.402278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.402297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.402457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.402476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.402641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.402660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.402846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.402866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.403046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.403065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.403239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.403266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 
00:29:53.054 [2024-07-15 11:45:27.403360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.403379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.403645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.403664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.403769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.403788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.054 [2024-07-15 11:45:27.403881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.054 [2024-07-15 11:45:27.403900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.054 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.404074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.404094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.404271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.404294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.404528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.404547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.404656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.404675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.404771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.404791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.404905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.404924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 
00:29:53.055 [2024-07-15 11:45:27.405104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.405123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.405227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.405247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.405352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.405372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.405484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.405504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.405599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.405623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.405727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.405747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.405842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.405861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.405957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.405976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.406146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.406166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.406305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.406325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 
00:29:53.055 [2024-07-15 11:45:27.406418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.406437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.406609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.406628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.406720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.406739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.406867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.406887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.406998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.407018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.407297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.407317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.407430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.407450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.407614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.407633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.407797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.407816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.407930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.407949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 
00:29:53.055 [2024-07-15 11:45:27.408125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.408144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.408237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.408261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.408364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.408384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.408673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.408692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.408816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.408835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.409004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.409023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.409130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.409150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.409236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.409260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.409377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.409397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.409569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.409589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 
00:29:53.055 [2024-07-15 11:45:27.409691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.409709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.409876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.409896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.410057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.410076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.410242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.410267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.410383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.410403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.410591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.410614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.410720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.410739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.055 [2024-07-15 11:45:27.410899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.055 [2024-07-15 11:45:27.410919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.055 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.411100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.411120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.411299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.411320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 
00:29:53.056 [2024-07-15 11:45:27.411443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.411462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.411628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.411648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.411743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.411762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.411933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.411953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.412135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.412154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.412338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.412358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.412461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.412481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.412700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.412719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.412883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.412902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.413007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.413026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 
00:29:53.056 [2024-07-15 11:45:27.413125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.413144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.413283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.413304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.413484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.413503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.413612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.413631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.413731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.413750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.413920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.413939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.414102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.414121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.414283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.414304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.414485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.414506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.414667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.414686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 
00:29:53.056 [2024-07-15 11:45:27.414793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.414812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.415058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.415077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.415174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.415192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.415311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.415331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.415494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.415513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.415608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.415627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.415800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.415819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.416060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.416079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.416180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.416199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.416304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.416324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 
00:29:53.056 [2024-07-15 11:45:27.416496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.416515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.416703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.416723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.416849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.416869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.416968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.416987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.417170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.417189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.417356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.417378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.417470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.417490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.417650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.417670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.417775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.417794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.417974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.417994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 
00:29:53.056 [2024-07-15 11:45:27.418226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.056 [2024-07-15 11:45:27.418246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.056 qpair failed and we were unable to recover it. 00:29:53.056 [2024-07-15 11:45:27.418453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.418473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.418643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.418663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.418766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.418785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.418980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.418999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.419189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.419208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.419318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.419338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.419582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.419601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.419692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.419711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.419807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.419826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 
00:29:53.057 [2024-07-15 11:45:27.420055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.420075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.420239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.420266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.420446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.420466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.420647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.420667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.420776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.420796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.421065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.421084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.421191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.421211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.421444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.421464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.421653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.421673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.421835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.421855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 
00:29:53.057 [2024-07-15 11:45:27.422028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.422047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.422210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.422229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.422416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.422436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.422620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.422640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.422738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.422756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.422919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.422939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.423125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.423144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.423240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.423265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.423438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.423457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.423686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.423705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 
00:29:53.057 [2024-07-15 11:45:27.423938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.423958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.424120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.424139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.424237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.424263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.424429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.424449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.424564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.424583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.424815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.424840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.424964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.424984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.425214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.425233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.425405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.425425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.425524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.425542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 
00:29:53.057 [2024-07-15 11:45:27.425720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.425739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.425900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.425919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.426022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.426041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.426207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.426226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.426414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.426433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.426639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.426658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.426825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.426845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.427013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.427032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.427125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.427144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.427325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.427345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 
00:29:53.057 [2024-07-15 11:45:27.427439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.427459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.427609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.427629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.057 [2024-07-15 11:45:27.427790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.057 [2024-07-15 11:45:27.427809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.057 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.427990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.428009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.428176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.428195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.428357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.428377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.428594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.428613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.428707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.428728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.428818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.428838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.428954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.428973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 
00:29:53.058 [2024-07-15 11:45:27.429094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.429113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.429368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.429388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.429554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.429574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.429843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.429861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.429969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.429988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.430077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.430096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.430274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.430294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.430458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.430477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.430591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.430611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.430707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.430726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 
00:29:53.058 [2024-07-15 11:45:27.430978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.430998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.431167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.431186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.431277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.431298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.431463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.431482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.431574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.431594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.431772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.431795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.431960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.431979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.432149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.432168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.432339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.432359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.432544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.432564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 
00:29:53.058 [2024-07-15 11:45:27.432801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.432820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.432928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.432948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.433126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.433145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.433404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.433423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.433617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.433636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.433736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.433755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.433864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.433883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.433997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.434016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.434209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.434228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.434351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.434371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 
00:29:53.058 [2024-07-15 11:45:27.434485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.434505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.434610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.434630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.434728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.434747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.434991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.435010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.435223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.435243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.435510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.435530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.435820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.435839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.435955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.435974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.436056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.436074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.436246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.436271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 
00:29:53.058 [2024-07-15 11:45:27.436482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.436501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.436675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.436695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.436795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.436815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.437066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.437085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.437182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.437201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.437311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.058 [2024-07-15 11:45:27.437331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.058 qpair failed and we were unable to recover it. 00:29:53.058 [2024-07-15 11:45:27.437589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.437608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.437779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.437799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.438037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.438056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.438235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.438270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 
00:29:53.059 [2024-07-15 11:45:27.438532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.438552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.438655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.438674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.438795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.438814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.439033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.439052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.439152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.439171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.439284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.439308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.439560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.439579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.439739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.439758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.439917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.439936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.440100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.440119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 
00:29:53.059 [2024-07-15 11:45:27.440356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.440376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.440549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.440569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.440813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.440833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.441007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.441026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.441204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.441223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.441418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.441438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.441630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.441649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.441939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.441958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.442127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.442146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.442260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.442281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 
00:29:53.059 [2024-07-15 11:45:27.442447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.442466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.442646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.442665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.442839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.442858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.443023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.443042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.443308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.443328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.443574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.443593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.443752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.443771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.443937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.443956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.444123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.444143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.444312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.444331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 
00:29:53.059 [2024-07-15 11:45:27.444439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.444458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.444690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.444709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.444875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.444894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.445092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.445110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.445220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.445240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.445353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.445373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.445655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.445675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.445775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.445794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.445958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.445977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.446079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.446099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 
00:29:53.059 [2024-07-15 11:45:27.446199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.446218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.446467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.446486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.446667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.446686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.446790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.446809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.446963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.446981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.447160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.447182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.447271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.447290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.447490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.447509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.447671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.447690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 00:29:53.059 [2024-07-15 11:45:27.447871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.059 [2024-07-15 11:45:27.447890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.059 qpair failed and we were unable to recover it. 
00:29:53.059 [2024-07-15 11:45:27.447987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.060 [2024-07-15 11:45:27.448007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.060 qpair failed and we were unable to recover it. 00:29:53.060 [2024-07-15 11:45:27.448198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.060 [2024-07-15 11:45:27.448218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.060 qpair failed and we were unable to recover it. 00:29:53.060 [2024-07-15 11:45:27.448395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.060 [2024-07-15 11:45:27.448415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.060 qpair failed and we were unable to recover it. 00:29:53.060 [2024-07-15 11:45:27.448586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.060 [2024-07-15 11:45:27.448606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.060 qpair failed and we were unable to recover it. 00:29:53.060 [2024-07-15 11:45:27.448839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.060 [2024-07-15 11:45:27.448859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.060 qpair failed and we were unable to recover it. 00:29:53.060 [2024-07-15 11:45:27.448969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.060 [2024-07-15 11:45:27.448988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.060 qpair failed and we were unable to recover it. 00:29:53.060 [2024-07-15 11:45:27.449155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.060 [2024-07-15 11:45:27.449174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.060 qpair failed and we were unable to recover it. 00:29:53.060 [2024-07-15 11:45:27.449327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.060 [2024-07-15 11:45:27.449348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.060 qpair failed and we were unable to recover it. 00:29:53.060 [2024-07-15 11:45:27.449443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.060 [2024-07-15 11:45:27.449462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.060 qpair failed and we were unable to recover it. 00:29:53.060 [2024-07-15 11:45:27.449656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.060 [2024-07-15 11:45:27.449676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.060 qpair failed and we were unable to recover it. 
00:29:53.060 [2024-07-15 11:45:27.449880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.060 [2024-07-15 11:45:27.449899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.060 qpair failed and we were unable to recover it. 00:29:53.060 [2024-07-15 11:45:27.450177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.060 [2024-07-15 11:45:27.450196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.060 qpair failed and we were unable to recover it. 00:29:53.060 [2024-07-15 11:45:27.450362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.060 [2024-07-15 11:45:27.450382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.060 qpair failed and we were unable to recover it. 00:29:53.060 [2024-07-15 11:45:27.450570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.060 [2024-07-15 11:45:27.450590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.060 qpair failed and we were unable to recover it. 00:29:53.060 [2024-07-15 11:45:27.450753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.060 [2024-07-15 11:45:27.450773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.060 qpair failed and we were unable to recover it. 00:29:53.060 [2024-07-15 11:45:27.450937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.060 [2024-07-15 11:45:27.450956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.060 qpair failed and we were unable to recover it. 00:29:53.060 [2024-07-15 11:45:27.451241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.060 [2024-07-15 11:45:27.451267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.060 qpair failed and we were unable to recover it. 00:29:53.060 [2024-07-15 11:45:27.451436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.060 [2024-07-15 11:45:27.451455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.060 qpair failed and we were unable to recover it. 00:29:53.060 [2024-07-15 11:45:27.451636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.060 [2024-07-15 11:45:27.451656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.060 qpair failed and we were unable to recover it. 00:29:53.060 [2024-07-15 11:45:27.451815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.060 [2024-07-15 11:45:27.451834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.060 qpair failed and we were unable to recover it. 
00:29:53.060 [2024-07-15 11:45:27.451955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.060 [2024-07-15 11:45:27.451974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:53.060 qpair failed and we were unable to recover it.
00:29:53.060-00:29:53.345 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 11:45:27.452 and 11:45:27.491, almost all against tqpair=0x7f7398000b90; a short burst around 11:45:27.475-11:45:27.477 reports the same failure for tqpair=0x19b9d70 before the errors return to tqpair=0x7f7398000b90 ...]
00:29:53.345 [2024-07-15 11:45:27.491788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.491807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.492044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.492063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.492226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.492246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.492333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.492351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.492472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.492492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.492656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.492675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.492861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.492881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.493044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.493063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.493224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.493243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.493350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.493370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 
00:29:53.345 [2024-07-15 11:45:27.493482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.493501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.493667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.493686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.493768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.493786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.493938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.493958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.494130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.494150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.494358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.494378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.494542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.494561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.494658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.494677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.494867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.494886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.495096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.495116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 
00:29:53.345 [2024-07-15 11:45:27.495291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.495311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.495558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.495577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.495813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.495832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.496005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.496024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.496132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.496151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.496325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.496344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.496575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.496594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.496854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.496873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.496978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.496997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.497162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.497181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 
00:29:53.345 [2024-07-15 11:45:27.497376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.497396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.497587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.497607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.497710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.497732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.497989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.498008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.498108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.498126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.498286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.498306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.498466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.498485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.498643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.498662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.498853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-15 11:45:27.498873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.345 qpair failed and we were unable to recover it. 00:29:53.345 [2024-07-15 11:45:27.498991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.499010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 
00:29:53.346 [2024-07-15 11:45:27.499179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.499198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.499377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.499397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.499582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.499601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.499714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.499734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.499854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.499873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.500050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.500070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.500183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.500203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.500385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.500405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.500613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.500632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.500725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.500743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 
00:29:53.346 [2024-07-15 11:45:27.500845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.500864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.501052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.501071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.501196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.501215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.501308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.501327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.501588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.501608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.501775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.501795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.501976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.501995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.502105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.502124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.502353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.502373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.502552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.502571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 
00:29:53.346 [2024-07-15 11:45:27.502668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.502686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.502856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.502876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.503076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.503095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.503253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.503277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.503487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.503507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.503629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.503648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.503846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.503865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.504123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.504142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.504431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.504451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.504658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.504677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 
00:29:53.346 [2024-07-15 11:45:27.504845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.504864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.505124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.505143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.505320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.505343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.505466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.505485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.505652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.505672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.505843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.505863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.505965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.505984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.506145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.506165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.506335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.506355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 00:29:53.346 [2024-07-15 11:45:27.506521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.346 [2024-07-15 11:45:27.506540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.346 qpair failed and we were unable to recover it. 
00:29:53.346 [2024-07-15 11:45:27.506723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.506742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.506918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.506937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.507167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.507187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.507284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.507302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.507413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.507432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.507612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.507632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.507729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.507747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.507941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.507960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.508124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.508144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.508240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.508264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 
00:29:53.347 [2024-07-15 11:45:27.508513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.508532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.508632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.508651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.508773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.508792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.508918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.508938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.509097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.509116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.509285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.509305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.509430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.509449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.509660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.509680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.509852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.509872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.509988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.510008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 
00:29:53.347 [2024-07-15 11:45:27.510102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.510120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.510278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.510298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.510425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.510445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.510555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.510574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.510739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.510758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.510854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.510873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.510973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.510993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.511154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.511174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.511355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.511375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.511615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.511635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 
00:29:53.347 [2024-07-15 11:45:27.511817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.511837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.512009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.512028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.512202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.512224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.512516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.347 [2024-07-15 11:45:27.512537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.347 qpair failed and we were unable to recover it. 00:29:53.347 [2024-07-15 11:45:27.512712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.348 [2024-07-15 11:45:27.512731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.348 qpair failed and we were unable to recover it. 00:29:53.348 [2024-07-15 11:45:27.512861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.348 [2024-07-15 11:45:27.512879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.348 qpair failed and we were unable to recover it. 00:29:53.348 [2024-07-15 11:45:27.512970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.348 [2024-07-15 11:45:27.512990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.348 qpair failed and we were unable to recover it. 00:29:53.348 [2024-07-15 11:45:27.513295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.348 [2024-07-15 11:45:27.513368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.348 qpair failed and we were unable to recover it. 00:29:53.348 [2024-07-15 11:45:27.513535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.348 [2024-07-15 11:45:27.513568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.348 qpair failed and we were unable to recover it. 00:29:53.348 [2024-07-15 11:45:27.513783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.348 [2024-07-15 11:45:27.513804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.348 qpair failed and we were unable to recover it. 
00:29:53.348 [2024-07-15 11:45:27.513977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.348 [2024-07-15 11:45:27.513996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.348 qpair failed and we were unable to recover it. 00:29:53.348 [2024-07-15 11:45:27.514117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.348 [2024-07-15 11:45:27.514137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.348 qpair failed and we were unable to recover it. 00:29:53.348 [2024-07-15 11:45:27.514251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.348 [2024-07-15 11:45:27.514277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.348 qpair failed and we were unable to recover it. 00:29:53.348 [2024-07-15 11:45:27.514377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.348 [2024-07-15 11:45:27.514397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.348 qpair failed and we were unable to recover it. 00:29:53.348 [2024-07-15 11:45:27.514491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.348 [2024-07-15 11:45:27.514510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.348 qpair failed and we were unable to recover it. 00:29:53.348 [2024-07-15 11:45:27.514638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.348 [2024-07-15 11:45:27.514657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.348 qpair failed and we were unable to recover it. 00:29:53.348 [2024-07-15 11:45:27.514895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.348 [2024-07-15 11:45:27.514914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.348 qpair failed and we were unable to recover it. 00:29:53.348 [2024-07-15 11:45:27.515088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.348 [2024-07-15 11:45:27.515107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.348 qpair failed and we were unable to recover it. 00:29:53.348 [2024-07-15 11:45:27.515288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.348 [2024-07-15 11:45:27.515309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.348 qpair failed and we were unable to recover it. 00:29:53.348 [2024-07-15 11:45:27.515403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.348 [2024-07-15 11:45:27.515421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.348 qpair failed and we were unable to recover it. 
00:29:53.348 [2024-07-15 11:45:27.515598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:53.348 [2024-07-15 11:45:27.515617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 
00:29:53.348 qpair failed and we were unable to recover it. 
00:29:53.348 [... the same three-line error (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats unchanged roughly 200 more times in this span, timestamps 2024-07-15 11:45:27.515882 through 11:45:27.556008, elapsed markers 00:29:53.348-00:29:53.353 ...] 
00:29:53.353 [2024-07-15 11:45:27.556138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.353 [2024-07-15 11:45:27.556157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.353 qpair failed and we were unable to recover it. 00:29:53.353 [2024-07-15 11:45:27.556350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.353 [2024-07-15 11:45:27.556371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.353 qpair failed and we were unable to recover it. 00:29:53.353 [2024-07-15 11:45:27.556555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.353 [2024-07-15 11:45:27.556574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.353 qpair failed and we were unable to recover it. 00:29:53.353 [2024-07-15 11:45:27.556828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.353 [2024-07-15 11:45:27.556848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.353 qpair failed and we were unable to recover it. 00:29:53.353 [2024-07-15 11:45:27.557026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.353 [2024-07-15 11:45:27.557044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.353 qpair failed and we were unable to recover it. 00:29:53.353 [2024-07-15 11:45:27.557266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.353 [2024-07-15 11:45:27.557285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.353 qpair failed and we were unable to recover it. 00:29:53.353 [2024-07-15 11:45:27.557465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.353 [2024-07-15 11:45:27.557484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.353 qpair failed and we were unable to recover it. 00:29:53.353 [2024-07-15 11:45:27.557657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.353 [2024-07-15 11:45:27.557676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.353 qpair failed and we were unable to recover it. 00:29:53.353 [2024-07-15 11:45:27.557784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.353 [2024-07-15 11:45:27.557803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.353 qpair failed and we were unable to recover it. 00:29:53.353 [2024-07-15 11:45:27.558079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.353 [2024-07-15 11:45:27.558098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.353 qpair failed and we were unable to recover it. 
00:29:53.353 [2024-07-15 11:45:27.558274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.353 [2024-07-15 11:45:27.558294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.353 qpair failed and we were unable to recover it. 00:29:53.353 [2024-07-15 11:45:27.558453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.353 [2024-07-15 11:45:27.558472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.353 qpair failed and we were unable to recover it. 00:29:53.353 [2024-07-15 11:45:27.558650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.353 [2024-07-15 11:45:27.558669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.353 qpair failed and we were unable to recover it. 00:29:53.353 [2024-07-15 11:45:27.558879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.353 [2024-07-15 11:45:27.558898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.559015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.559034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.559196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.559215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.559335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.559354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.559634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.559654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.559759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.559778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.559936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.559955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 
00:29:53.354 [2024-07-15 11:45:27.560140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.560159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.560251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.560277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.560408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.560427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.560636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.560655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.560794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.560814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.561012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.561031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.561277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.561297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.561542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.561561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.561667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.561687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.561779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.561797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 
00:29:53.354 [2024-07-15 11:45:27.562063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.562083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.562320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.562340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.562462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.562480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.562661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.562680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.562791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.562810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.562919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.562938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.563189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.563208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.563401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.563421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.563518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.563536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.563689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.563712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 
00:29:53.354 [2024-07-15 11:45:27.563844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.563864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.563959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.563978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.564139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.564158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.564339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.564359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.564645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.564664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.564777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.564796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.564987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.565007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.565099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.565119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.565224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.565243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.565499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.565519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 
00:29:53.354 [2024-07-15 11:45:27.565624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.565643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.565801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.565820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.565993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.566011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.354 qpair failed and we were unable to recover it. 00:29:53.354 [2024-07-15 11:45:27.566109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.354 [2024-07-15 11:45:27.566127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.566250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.566276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.566367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.566385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.566642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.566661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.566837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.566857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.567034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.567054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.567311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.567331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 
00:29:53.355 [2024-07-15 11:45:27.567423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.567445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.567697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.567716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.568006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.568025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.568262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.568283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.568455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.568475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.568562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.568580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.568761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.568780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.568958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.568977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.569136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.569155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.569326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.569346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 
00:29:53.355 [2024-07-15 11:45:27.569504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.569523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.569753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.569772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.569947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.569967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.570070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.570090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.570378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.570398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.570642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.570661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.570772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.570791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.570991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.571010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.571120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.571139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.571319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.571343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 
00:29:53.355 [2024-07-15 11:45:27.571452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.571471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.571629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.571648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.571917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.571936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.572119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.572139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.572305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.572324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.572484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.572504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.572689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.572708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.572897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.572916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.355 [2024-07-15 11:45:27.573091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.355 [2024-07-15 11:45:27.573110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.355 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.573381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.573401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 
00:29:53.356 [2024-07-15 11:45:27.573563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.573582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.573753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.573773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.573873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.573892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.574053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.574072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.574248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.574282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.574490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.574509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.574626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.574645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.574754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.574773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.574860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.574877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.574970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.574989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 
00:29:53.356 [2024-07-15 11:45:27.575168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.575187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.575305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.575325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.575486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.575504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.575675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.575694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.575826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.575845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.575972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.575991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.576230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.576250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.576451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.576470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.576644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.576664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.576826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.576845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 
00:29:53.356 [2024-07-15 11:45:27.577096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.577115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.577275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.577295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.577424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.577443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.577630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.577649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.577830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.577849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.578107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.578127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.578231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.578251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.578365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.578384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.578482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.578500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.578596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.578618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 
00:29:53.356 [2024-07-15 11:45:27.578825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.578845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.578958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.578977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.579177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.579196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.579505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.579525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.579632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.579651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.579812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.579832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.580040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.580059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.580309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.580329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.580423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.580440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 00:29:53.356 [2024-07-15 11:45:27.580712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.356 [2024-07-15 11:45:27.580732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.356 qpair failed and we were unable to recover it. 
00:29:53.357 [2024-07-15 11:45:27.580838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.580856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.581020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.581039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.581154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.581173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.581436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.581455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.581563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.581582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.581760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.581780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.581877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.581897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.582099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.582118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.582283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.582303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.582434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.582453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 
00:29:53.357 [2024-07-15 11:45:27.582657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.582676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.582775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.582794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.582902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.582922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.583037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.583062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.583241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.583277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.583394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.583413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.583511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.583530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.583629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.583648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.583809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.583829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.584004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.584023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 
00:29:53.357 [2024-07-15 11:45:27.584147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.584166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.584331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.584351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.584459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.584478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.584637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.584656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.584853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.584872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.585089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.585108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.585208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.585227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.585397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.585417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.585511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.585531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.585711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.585733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 
00:29:53.357 [2024-07-15 11:45:27.585982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.586001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.586258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.586278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.586355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.586373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.586581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.586600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.586793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.586812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.587107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.587126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.587229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.587249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.587369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.587389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.587628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.587647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 00:29:53.357 [2024-07-15 11:45:27.587806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.357 [2024-07-15 11:45:27.587825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.357 qpair failed and we were unable to recover it. 
00:29:53.357 [2024-07-15 11:45:27.588158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.588177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.588412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.588432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.588527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.588546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.588711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.588730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.588902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.588921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.589078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.589097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.589286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.589305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.589479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.589499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.589669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.589688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.589785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.589805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 
00:29:53.358 [2024-07-15 11:45:27.590012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.590031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.590302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.590322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.590435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.590455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.590645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.590664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.590827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.590846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.591010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.591029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.591223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.591242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.591341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.591362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.591478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.591498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.591611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.591630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 
00:29:53.358 [2024-07-15 11:45:27.591813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.591832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.592011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.592030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.592275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.592295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.592402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.592422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.592625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.592643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.592813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.592832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.592944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.592964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.593057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.593076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.593266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.593286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.593385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.593408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 
00:29:53.358 [2024-07-15 11:45:27.593585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.593604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.593838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.593857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.594048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.594067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.594179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.594199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.594376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.594396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.594512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.594531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.594645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.594664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.594834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.594853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.595066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.595085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.595317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.595337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 
00:29:53.358 [2024-07-15 11:45:27.595448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.358 [2024-07-15 11:45:27.595467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.358 qpair failed and we were unable to recover it. 00:29:53.358 [2024-07-15 11:45:27.595699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.595718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.595819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.595837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.596006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.596025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.596131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.596150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.596321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.596342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.596505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.596524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.596689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.596707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.596830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.596848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.596960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.596980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 
00:29:53.359 [2024-07-15 11:45:27.597088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.597107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.597269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.597288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.597398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.597417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.597697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.597717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.597813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.597831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.597917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.597936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.598104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.598124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.598354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.598374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.598537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.598568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.598716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.598747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 
00:29:53.359 [2024-07-15 11:45:27.599005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.599036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.599171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.599190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.599373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.599392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.599503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.599522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.599710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.599728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.599992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.600023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.600160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.600190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.600321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.600354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.600633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.600672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.600848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.600870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 
00:29:53.359 [2024-07-15 11:45:27.601117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.601136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.601368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.601388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.601605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.601624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.601737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.601756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.601915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.601934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.602168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.602187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.602468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.602500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.602650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.602680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.602995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.603026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.603279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.603311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 
00:29:53.359 [2024-07-15 11:45:27.603595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.603626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.603933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.359 [2024-07-15 11:45:27.603964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.359 qpair failed and we were unable to recover it. 00:29:53.359 [2024-07-15 11:45:27.604275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.604313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.604538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.604570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.604770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.604789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.605020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.605051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.605331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.605364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.605582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.605601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.605718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.605749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.605888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.605919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 
00:29:53.360 [2024-07-15 11:45:27.606054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.606085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.606229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.606268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.606488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.606519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.606656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.606687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.606900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.606931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.607151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.607182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.607323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.607342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.607624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.607644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.607859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.607878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.608060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.608091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 
00:29:53.360 [2024-07-15 11:45:27.608294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.608326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.608588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.608618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.608797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.608828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.609023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.609054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.609240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.609281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.609455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.609474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.609655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.609686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.609891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.609922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.610126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.610157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.610346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.610383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 
00:29:53.360 [2024-07-15 11:45:27.610670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.610711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.610884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.610903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.611135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.611154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.611347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.611367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.611461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.611480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.611668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.611699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.611889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.611920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.360 [2024-07-15 11:45:27.612197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.360 [2024-07-15 11:45:27.612228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.360 qpair failed and we were unable to recover it. 00:29:53.361 [2024-07-15 11:45:27.612490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.361 [2024-07-15 11:45:27.612521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.361 qpair failed and we were unable to recover it. 00:29:53.361 [2024-07-15 11:45:27.612752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.361 [2024-07-15 11:45:27.612783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.361 qpair failed and we were unable to recover it. 
00:29:53.361 [2024-07-15 11:45:27.613065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.361 [2024-07-15 11:45:27.613096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.361 qpair failed and we were unable to recover it. 00:29:53.361 [2024-07-15 11:45:27.613353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.361 [2024-07-15 11:45:27.613384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.361 qpair failed and we were unable to recover it. 00:29:53.361 [2024-07-15 11:45:27.613656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.361 [2024-07-15 11:45:27.613687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.361 qpair failed and we were unable to recover it. 00:29:53.361 [2024-07-15 11:45:27.613814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.361 [2024-07-15 11:45:27.613844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.361 qpair failed and we were unable to recover it. 00:29:53.361 [2024-07-15 11:45:27.614177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.361 [2024-07-15 11:45:27.614208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.361 qpair failed and we were unable to recover it. 00:29:53.361 [2024-07-15 11:45:27.614494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.361 [2024-07-15 11:45:27.614526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.361 qpair failed and we were unable to recover it. 00:29:53.361 [2024-07-15 11:45:27.614658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.361 [2024-07-15 11:45:27.614677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.361 qpair failed and we were unable to recover it. 00:29:53.361 [2024-07-15 11:45:27.614802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.361 [2024-07-15 11:45:27.614821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.361 qpair failed and we were unable to recover it. 00:29:53.361 [2024-07-15 11:45:27.614947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.361 [2024-07-15 11:45:27.614966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.361 qpair failed and we were unable to recover it. 00:29:53.361 [2024-07-15 11:45:27.615179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.361 [2024-07-15 11:45:27.615198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.361 qpair failed and we were unable to recover it. 
00:29:53.361 [2024-07-15 11:45:27.615315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.361 [2024-07-15 11:45:27.615335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:53.361 qpair failed and we were unable to recover it.
00:29:53.361 [ ... the same three-line error pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 11:45:27.615315 through 11:45:27.660760; the intervening repetitions are condensed here ... ]
00:29:53.367 [2024-07-15 11:45:27.661013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.661032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.661208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.661226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.661337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.661357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.661542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.661573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.661707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.661737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.661991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.662021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.662216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.662247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.662473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.662504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.662633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.662664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.662922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.662952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 
00:29:53.367 [2024-07-15 11:45:27.663161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.663192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.663380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.663412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.663616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.663647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.663864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.663895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.664113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.664143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.664321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.664353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.664634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.664665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.664800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.664831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.664964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.664994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.665189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.665219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 
00:29:53.367 [2024-07-15 11:45:27.665454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.665485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.665671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.665702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.665912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.665931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.666122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.666141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.666252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.666276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.666380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.666399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.666564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.666583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.666694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.666724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.666863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.666895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.667004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.667034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 
00:29:53.367 [2024-07-15 11:45:27.667291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.667323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.667437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.667468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.667671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.667702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.667901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.667932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.668232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.668275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.668406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.367 [2024-07-15 11:45:27.668437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-15 11:45:27.668691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.668726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.668921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.668952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.669227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.669268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.669549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.669580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 
00:29:53.368 [2024-07-15 11:45:27.669795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.669814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.670024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.670055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.670193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.670224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.670488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.670520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.670721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.670751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.670958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.670998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.671282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.671315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.671461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.671491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.671677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.671708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.671897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.671928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 
00:29:53.368 [2024-07-15 11:45:27.672117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.672148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.672334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.672366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.672643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.672674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.672870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.672901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.673035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.673066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.673298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.673330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.673619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.673650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.673801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.673832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.674024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.674055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.674239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.674290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 
00:29:53.368 [2024-07-15 11:45:27.674437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.674468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.674744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.674775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.675029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.675060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.675197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.675229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.675539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.675571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.675792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.675823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.675961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.675992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.676193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.676213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-15 11:45:27.676387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.368 [2024-07-15 11:45:27.676407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.676637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.676656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 
00:29:53.369 [2024-07-15 11:45:27.676756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.676776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.677015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.677045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.677224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.677263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.677385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.677416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.677629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.677660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.677875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.677906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.678090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.678113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.678221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.678240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.678424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.678443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.678681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.678713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 
00:29:53.369 [2024-07-15 11:45:27.678926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.678957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.679140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.679159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.679278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.679299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.679477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.679497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.679705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.679735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.679878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.679908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.680117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.680148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.680433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.680466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.680662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.680693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.680875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.680905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 
00:29:53.369 [2024-07-15 11:45:27.681120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.681151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.681409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.681441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.681678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.681709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.681934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.681978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.682157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.682176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.682305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.682336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.682451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.682482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.682667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.682698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.682819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.682849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.683117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.683148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 
00:29:53.369 [2024-07-15 11:45:27.683419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.683455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.683646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.369 [2024-07-15 11:45:27.683677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.369 qpair failed and we were unable to recover it. 00:29:53.369 [2024-07-15 11:45:27.683875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.683906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.684058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.684088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.684346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.684379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.684674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.684705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.684849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.684880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.685067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.685098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.685222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.685343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.685494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.685524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 
00:29:53.370 [2024-07-15 11:45:27.685809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.685840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.686050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.686081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.686361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.686381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.686559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.686578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.686836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.686866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.687054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.687086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.687288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.687326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.687583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.687613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.687787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.687819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.688071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.688102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 
00:29:53.370 [2024-07-15 11:45:27.688394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.688425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.688612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.688643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.688931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.688962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.689111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.689142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.689341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.689362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.689482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.689512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.689636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.689667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.689963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.689994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.690180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.690211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.690499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.690531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 
00:29:53.370 [2024-07-15 11:45:27.690733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.690764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.691039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.691058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.691289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.691309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.691413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.691432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.691608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.691627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.691889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.691930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.692127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.692158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.692359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.370 [2024-07-15 11:45:27.692391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.370 qpair failed and we were unable to recover it. 00:29:53.370 [2024-07-15 11:45:27.692641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.692671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.692870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.692902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 
00:29:53.371 [2024-07-15 11:45:27.693093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.693112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.693289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.693310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.693423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.693442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.693561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.693581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.693678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.693697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.693781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.693802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.693901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.693920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.694026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.694044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.694232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.694270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.694461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.694492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 
00:29:53.371 [2024-07-15 11:45:27.694713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.694744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.695015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.695034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.695213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.695244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.695448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.695480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.695692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.695723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.695997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.696016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.696122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.696143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.696251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.696277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.696393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.696412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.696672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.696702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 
00:29:53.371 [2024-07-15 11:45:27.696939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.696969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.697201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.697220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.697463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.697482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.697740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.697771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.697976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.697995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.698253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.698294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.698419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.698450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.698674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.698704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.698907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.698927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.699017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.699035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 
00:29:53.371 [2024-07-15 11:45:27.699190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.699223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.699446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.699478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.371 [2024-07-15 11:45:27.699761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.371 [2024-07-15 11:45:27.699791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.371 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.699993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.700024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.700278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.700309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.700490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.700521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.700735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.700766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.700991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.701010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.701184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.701215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.701413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.701445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 
00:29:53.372 [2024-07-15 11:45:27.701567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.701598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.701866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.701897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.702184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.702214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.702401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.702444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.702554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.702573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.702744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.702762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.702854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.702874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.703055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.703086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.703290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.703321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.703576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.703606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 
00:29:53.372 [2024-07-15 11:45:27.703724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.703743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.703912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.703943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.704131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.704161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.704371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.704403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.704523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.704553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.704765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.704795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.704922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.704962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.705167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.705198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.705396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.705429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.705627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.705657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 
00:29:53.372 [2024-07-15 11:45:27.705912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.705955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.706203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.706223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.706413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.706433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.706683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.706701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.706962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.707002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.707280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.707313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.707513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.707543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.707851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.707871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.708035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.708054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.708191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.708222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 
00:29:53.372 [2024-07-15 11:45:27.708515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.372 [2024-07-15 11:45:27.708547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.372 qpair failed and we were unable to recover it. 00:29:53.372 [2024-07-15 11:45:27.708663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.708693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.708905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.708935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.709228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.709271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.709484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.709514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.709715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.709746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.710029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.710048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.710335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.710355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.710521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.710539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.710754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.710785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 
00:29:53.373 [2024-07-15 11:45:27.711002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.711033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.711156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.711187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.711319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.711351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.711629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.711699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.711903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.711936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.712145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.712176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.712359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.712380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.712596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.712616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.712803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.712834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.713027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.713057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 
00:29:53.373 [2024-07-15 11:45:27.713275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.713306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.713592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.713623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.713905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.713924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.714155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.714175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.714298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.714318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.714497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.714528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.714758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.714794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.715046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.715065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.715293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.715313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.715472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.715492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 
00:29:53.373 [2024-07-15 11:45:27.715610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.715642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.715757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.715787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.716022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.716052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.716335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.716367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.716500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.716532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.716677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.716696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.716980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.717011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.373 [2024-07-15 11:45:27.717217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.373 [2024-07-15 11:45:27.717248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.373 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.717388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.717419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.717673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.717704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 
00:29:53.374 [2024-07-15 11:45:27.717933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.717952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.718071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.718090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.718184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.718203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.718475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.718496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.718779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.718810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.719093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.719123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.719334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.719365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.719555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.719586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.719798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.719828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.720110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.720141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 
00:29:53.374 [2024-07-15 11:45:27.720397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.720429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.720684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.720715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.720842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.720872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.721108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.721140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.721354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.721385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.721589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.721620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.721798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.721817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.721913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.721932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.722093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.722112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.722364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.722384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 
00:29:53.374 [2024-07-15 11:45:27.722618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.722648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.722786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.722817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.723020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.723050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.723360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.723392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.723588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.723619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.723738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.723769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.724057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.724092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.724277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.724309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.374 qpair failed and we were unable to recover it. 00:29:53.374 [2024-07-15 11:45:27.724497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.374 [2024-07-15 11:45:27.724528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.724711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.724741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 
00:29:53.375 [2024-07-15 11:45:27.724944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.724975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.725267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.725298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.725504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.725535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.725751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.725770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.726027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.726046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.726228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.726248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.726427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.726447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.726634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.726665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.726918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.726948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.727225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.727273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 
00:29:53.375 [2024-07-15 11:45:27.727534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.727566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.727776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.727807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.728012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.728043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.728238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.728281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.728457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.728487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.728601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.728631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.728908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.728939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.729146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.729177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.729433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.729465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.729671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.729701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 
00:29:53.375 [2024-07-15 11:45:27.729958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.729989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.730248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.730287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.730547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.730578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.730727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.730758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.731073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.731104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.731320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.731351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.731614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.731645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.731840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.731872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.732084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.732114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.732369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.732401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 
00:29:53.375 [2024-07-15 11:45:27.732668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.732711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.732804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.732822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.733082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.733112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.733253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.733294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.733491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.375 [2024-07-15 11:45:27.733522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.375 qpair failed and we were unable to recover it. 00:29:53.375 [2024-07-15 11:45:27.733777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.733808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.734090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.734126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.734412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.734453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.734739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.734770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.734886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.734917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 
00:29:53.376 [2024-07-15 11:45:27.735044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.735074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.735279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.735311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.735567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.735588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.735751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.735771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.735859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.735899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.736075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.736107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.736302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.736334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.736471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.736502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.736726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.736757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.737021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.737039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 
00:29:53.376 [2024-07-15 11:45:27.737220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.737239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.737506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.737537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.737853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.737883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.738137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.738168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.738299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.738332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.738528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.738547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.738781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.738812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.739094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.739125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.739278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.739298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.739515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.739547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 
00:29:53.376 [2024-07-15 11:45:27.739822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.739853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.740035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.740065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.740247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.740284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.740546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.740578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.740811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.740842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.741031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.741062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.741325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.741346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.741535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.741555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.741820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.741857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.742061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.742093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 
00:29:53.376 [2024-07-15 11:45:27.742218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.742250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.742519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.742550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.742751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.742781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.376 qpair failed and we were unable to recover it. 00:29:53.376 [2024-07-15 11:45:27.742910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.376 [2024-07-15 11:45:27.742929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.743131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.743162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.743447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.743479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.743688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.743724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.743928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.743960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.744241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.744285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.744436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.744468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 
00:29:53.377 [2024-07-15 11:45:27.744643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.744673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.744802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.744843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.744950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.744969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.745263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.745295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.745431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.745462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.745718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.745750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.746057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.746077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.746167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.746185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.746362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.746382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.746639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.746676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 
00:29:53.377 [2024-07-15 11:45:27.746832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.746862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.747146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.747176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.747310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.747342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.747481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.747511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.747766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.747797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.747999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.748018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.748123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.748165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.748354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.748386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.748587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.748618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.748759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.748790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 
00:29:53.377 [2024-07-15 11:45:27.748936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.748955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.749210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.749241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.749389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.749420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.749706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.749737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.749942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.749973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.750196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.750215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.750469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.750489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.750669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.750689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.750861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.750880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 00:29:53.377 [2024-07-15 11:45:27.750983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.377 [2024-07-15 11:45:27.751002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.377 qpair failed and we were unable to recover it. 
00:29:53.377 [2024-07-15 11:45:27.751108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.751128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 00:29:53.378 [2024-07-15 11:45:27.751216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.751234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 00:29:53.378 [2024-07-15 11:45:27.751502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.751521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 00:29:53.378 [2024-07-15 11:45:27.751631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.751650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 00:29:53.378 [2024-07-15 11:45:27.751835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.751854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 00:29:53.378 [2024-07-15 11:45:27.752014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.752033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 00:29:53.378 [2024-07-15 11:45:27.752214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.752234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 00:29:53.378 [2024-07-15 11:45:27.752423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.752444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 00:29:53.378 [2024-07-15 11:45:27.752680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.752711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 00:29:53.378 [2024-07-15 11:45:27.752840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.752871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 
00:29:53.378 [2024-07-15 11:45:27.753093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.753124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 00:29:53.378 [2024-07-15 11:45:27.753447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.753479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 00:29:53.378 [2024-07-15 11:45:27.753688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.753720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 00:29:53.378 [2024-07-15 11:45:27.753847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.753879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 00:29:53.378 [2024-07-15 11:45:27.754027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.754058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 00:29:53.378 [2024-07-15 11:45:27.754244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.754283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 00:29:53.378 [2024-07-15 11:45:27.754473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.754492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 00:29:53.378 [2024-07-15 11:45:27.754686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.754717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 00:29:53.378 [2024-07-15 11:45:27.754971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.755002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 00:29:53.378 [2024-07-15 11:45:27.755201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.755232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 
00:29:53.378 [2024-07-15 11:45:27.755540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.755572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 00:29:53.378 [2024-07-15 11:45:27.755761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.755791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 00:29:53.378 [2024-07-15 11:45:27.756077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.756109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 00:29:53.378 [2024-07-15 11:45:27.756365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.756397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 00:29:53.378 [2024-07-15 11:45:27.756584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.756615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 00:29:53.378 [2024-07-15 11:45:27.756836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.756867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 00:29:53.378 [2024-07-15 11:45:27.757100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.757118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.378 qpair failed and we were unable to recover it. 00:29:53.378 [2024-07-15 11:45:27.757273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.378 [2024-07-15 11:45:27.757293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.379 qpair failed and we were unable to recover it. 00:29:53.379 [2024-07-15 11:45:27.757456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.379 [2024-07-15 11:45:27.757475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.379 qpair failed and we were unable to recover it. 00:29:53.379 [2024-07-15 11:45:27.757736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.379 [2024-07-15 11:45:27.757768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.379 qpair failed and we were unable to recover it. 
00:29:53.379 [2024-07-15 11:45:27.757896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.379 [2024-07-15 11:45:27.757927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.379 qpair failed and we were unable to recover it. 00:29:53.379 [2024-07-15 11:45:27.758129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.379 [2024-07-15 11:45:27.758159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.379 qpair failed and we were unable to recover it. 00:29:53.379 [2024-07-15 11:45:27.758331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.379 [2024-07-15 11:45:27.758351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.379 qpair failed and we were unable to recover it. 00:29:53.379 [2024-07-15 11:45:27.758463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.379 [2024-07-15 11:45:27.758485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.379 qpair failed and we were unable to recover it. 00:29:53.379 [2024-07-15 11:45:27.758649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.379 [2024-07-15 11:45:27.758668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.379 qpair failed and we were unable to recover it. 00:29:53.379 [2024-07-15 11:45:27.758851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.379 [2024-07-15 11:45:27.758882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.379 qpair failed and we were unable to recover it. 00:29:53.379 [2024-07-15 11:45:27.759065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.379 [2024-07-15 11:45:27.759096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.379 qpair failed and we were unable to recover it. 00:29:53.379 [2024-07-15 11:45:27.759296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.379 [2024-07-15 11:45:27.759339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.379 qpair failed and we were unable to recover it. 00:29:53.379 [2024-07-15 11:45:27.759524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.379 [2024-07-15 11:45:27.759544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.379 qpair failed and we were unable to recover it. 00:29:53.379 [2024-07-15 11:45:27.759714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.379 [2024-07-15 11:45:27.759734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.379 qpair failed and we were unable to recover it. 
00:29:53.379 [2024-07-15 11:45:27.760001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.379 [2024-07-15 11:45:27.760020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.379 qpair failed and we were unable to recover it. 00:29:53.379 [2024-07-15 11:45:27.760178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.379 [2024-07-15 11:45:27.760197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.379 qpair failed and we were unable to recover it. 00:29:53.379 [2024-07-15 11:45:27.760376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.379 [2024-07-15 11:45:27.760396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.379 qpair failed and we were unable to recover it. 00:29:53.379 [2024-07-15 11:45:27.760632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.379 [2024-07-15 11:45:27.760663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.379 qpair failed and we were unable to recover it. 00:29:53.379 [2024-07-15 11:45:27.760883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.379 [2024-07-15 11:45:27.760914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.379 qpair failed and we were unable to recover it. 00:29:53.379 [2024-07-15 11:45:27.761039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.379 [2024-07-15 11:45:27.761071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.379 qpair failed and we were unable to recover it. 00:29:53.379 [2024-07-15 11:45:27.761205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.379 [2024-07-15 11:45:27.761224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.379 qpair failed and we were unable to recover it. 00:29:53.379 [2024-07-15 11:45:27.761490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.379 [2024-07-15 11:45:27.761510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.379 qpair failed and we were unable to recover it. 00:29:53.379 [2024-07-15 11:45:27.761690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.379 [2024-07-15 11:45:27.761721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.379 qpair failed and we were unable to recover it. 00:29:53.379 [2024-07-15 11:45:27.762000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.379 [2024-07-15 11:45:27.762019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.379 qpair failed and we were unable to recover it. 
00:29:53.379 [2024-07-15 11:45:27.762121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.379 [2024-07-15 11:45:27.762141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.379 qpair failed and we were unable to recover it. 00:29:53.379 [2024-07-15 11:45:27.762405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.379 [2024-07-15 11:45:27.762446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.379 qpair failed and we were unable to recover it. 00:29:53.379 [2024-07-15 11:45:27.762584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.379 [2024-07-15 11:45:27.762615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.379 qpair failed and we were unable to recover it. 00:29:53.379 [2024-07-15 11:45:27.762751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.762782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.762965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.762995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.763197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.763228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.763505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.763525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.763696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.763715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.763883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.763914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.764053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.764084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 
00:29:53.380 [2024-07-15 11:45:27.764289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.764322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.764528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.764547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.764722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.764752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.764965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.764985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.765247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.765272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.765448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.765467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.765626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.765664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.765880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.765911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.766198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.766218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.766415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.766435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 
00:29:53.380 [2024-07-15 11:45:27.766617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.766636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.766808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.766838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.766986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.767017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.767134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.767170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.767456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.767489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.767625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.767656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.767910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.767941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.768137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.768156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.768352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.768372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.768635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.768665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 
00:29:53.380 [2024-07-15 11:45:27.768852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.768883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.769099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.769130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.769265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.769286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.769405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.769425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.769611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.769631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.769824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.769843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.770011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.770042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.770326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.770359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.770492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.770523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.770805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.770836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 
00:29:53.380 [2024-07-15 11:45:27.771126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.771156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.771463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.771483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.771644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.771663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.380 qpair failed and we were unable to recover it. 00:29:53.380 [2024-07-15 11:45:27.771848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.380 [2024-07-15 11:45:27.771879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.772027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.772058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.772268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.772299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.772515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.772545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.772691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.772722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.772930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.772949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.773112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.773131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 
00:29:53.381 [2024-07-15 11:45:27.773308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.773328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.773442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.773459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.773635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.773654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.773816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.773835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.774014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.774045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.774245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.774298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.774602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.774633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.774838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.774870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.775001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.775019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.775273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.775306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 
00:29:53.381 [2024-07-15 11:45:27.775487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.775518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.775799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.775829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.776116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.776147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.776409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.776433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.776620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.776651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.776933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.776964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.777171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.777202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.777344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.777364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.777457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.777476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.777666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.777685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 
00:29:53.381 [2024-07-15 11:45:27.777851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.777869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.777976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.778007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.778135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.778166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.778449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.778481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.778680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.778711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.778997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.779028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.779281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.381 [2024-07-15 11:45:27.779313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.381 qpair failed and we were unable to recover it. 00:29:53.381 [2024-07-15 11:45:27.779525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.382 [2024-07-15 11:45:27.779555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.382 qpair failed and we were unable to recover it. 00:29:53.382 [2024-07-15 11:45:27.779678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.382 [2024-07-15 11:45:27.779708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.382 qpair failed and we were unable to recover it. 00:29:53.382 [2024-07-15 11:45:27.780056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.382 [2024-07-15 11:45:27.780075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.382 qpair failed and we were unable to recover it. 
00:29:53.382 [2024-07-15 11:45:27.780236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.382 [2024-07-15 11:45:27.780261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.382 qpair failed and we were unable to recover it. 00:29:53.382 [2024-07-15 11:45:27.780443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.382 [2024-07-15 11:45:27.780463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.382 qpair failed and we were unable to recover it. 00:29:53.382 [2024-07-15 11:45:27.780637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.382 [2024-07-15 11:45:27.780667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.382 qpair failed and we were unable to recover it. 00:29:53.382 [2024-07-15 11:45:27.780801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.382 [2024-07-15 11:45:27.780833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.382 qpair failed and we were unable to recover it. 00:29:53.382 [2024-07-15 11:45:27.780976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.382 [2024-07-15 11:45:27.781007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.382 qpair failed and we were unable to recover it. 00:29:53.382 [2024-07-15 11:45:27.781211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.382 [2024-07-15 11:45:27.781242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.382 qpair failed and we were unable to recover it. 00:29:53.382 [2024-07-15 11:45:27.781455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.382 [2024-07-15 11:45:27.781486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.382 qpair failed and we were unable to recover it. 00:29:53.382 [2024-07-15 11:45:27.781701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.382 [2024-07-15 11:45:27.781731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.382 qpair failed and we were unable to recover it. 00:29:53.382 [2024-07-15 11:45:27.781948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.382 [2024-07-15 11:45:27.781977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.382 qpair failed and we were unable to recover it. 00:29:53.382 [2024-07-15 11:45:27.782109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.382 [2024-07-15 11:45:27.782140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.382 qpair failed and we were unable to recover it. 
00:29:53.382 [2024-07-15 11:45:27.782279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.382 [2024-07-15 11:45:27.782324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.382 qpair failed and we were unable to recover it. 00:29:53.382 [2024-07-15 11:45:27.782609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.382 [2024-07-15 11:45:27.782641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.382 qpair failed and we were unable to recover it. 00:29:53.382 [2024-07-15 11:45:27.782897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.382 [2024-07-15 11:45:27.782928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.382 qpair failed and we were unable to recover it. 00:29:53.382 [2024-07-15 11:45:27.783122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.382 [2024-07-15 11:45:27.783153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.382 qpair failed and we were unable to recover it. 00:29:53.382 [2024-07-15 11:45:27.783359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.382 [2024-07-15 11:45:27.783379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.382 qpair failed and we were unable to recover it. 00:29:53.382 [2024-07-15 11:45:27.783650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.382 [2024-07-15 11:45:27.783681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.382 qpair failed and we were unable to recover it. 00:29:53.382 [2024-07-15 11:45:27.784005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.382 [2024-07-15 11:45:27.784035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.382 qpair failed and we were unable to recover it. 00:29:53.382 [2024-07-15 11:45:27.784296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.382 [2024-07-15 11:45:27.784328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.382 qpair failed and we were unable to recover it. 00:29:53.662 [2024-07-15 11:45:27.784602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.662 [2024-07-15 11:45:27.784635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.662 qpair failed and we were unable to recover it. 00:29:53.662 [2024-07-15 11:45:27.784910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.662 [2024-07-15 11:45:27.784942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.662 qpair failed and we were unable to recover it. 
00:29:53.662 [2024-07-15 11:45:27.785151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.662 [2024-07-15 11:45:27.785193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.662 qpair failed and we were unable to recover it. 00:29:53.662 [2024-07-15 11:45:27.785312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.662 [2024-07-15 11:45:27.785332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.662 qpair failed and we were unable to recover it. 00:29:53.662 [2024-07-15 11:45:27.785506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.662 [2024-07-15 11:45:27.785524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.662 qpair failed and we were unable to recover it. 00:29:53.662 [2024-07-15 11:45:27.785754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.662 [2024-07-15 11:45:27.785777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.662 qpair failed and we were unable to recover it. 00:29:53.662 [2024-07-15 11:45:27.785879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.662 [2024-07-15 11:45:27.785899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.662 qpair failed and we were unable to recover it. 00:29:53.662 [2024-07-15 11:45:27.786172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.662 [2024-07-15 11:45:27.786203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.662 qpair failed and we were unable to recover it. 00:29:53.662 [2024-07-15 11:45:27.786526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.662 [2024-07-15 11:45:27.786560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.662 qpair failed and we were unable to recover it. 00:29:53.662 [2024-07-15 11:45:27.786782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.662 [2024-07-15 11:45:27.786813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.662 qpair failed and we were unable to recover it. 00:29:53.662 [2024-07-15 11:45:27.786956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.662 [2024-07-15 11:45:27.786986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.662 qpair failed and we were unable to recover it. 00:29:53.662 [2024-07-15 11:45:27.787171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.662 [2024-07-15 11:45:27.787202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.662 qpair failed and we were unable to recover it. 
00:29:53.662 [2024-07-15 11:45:27.787401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.662 [2024-07-15 11:45:27.787433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.662 qpair failed and we were unable to recover it. 00:29:53.662 [2024-07-15 11:45:27.787703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.662 [2024-07-15 11:45:27.787734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.662 qpair failed and we were unable to recover it. 00:29:53.662 [2024-07-15 11:45:27.787940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.662 [2024-07-15 11:45:27.787971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.662 qpair failed and we were unable to recover it. 00:29:53.662 [2024-07-15 11:45:27.788178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.662 [2024-07-15 11:45:27.788197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.662 qpair failed and we were unable to recover it. 00:29:53.662 [2024-07-15 11:45:27.788368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.662 [2024-07-15 11:45:27.788388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.662 qpair failed and we were unable to recover it. 00:29:53.662 [2024-07-15 11:45:27.788619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.662 [2024-07-15 11:45:27.788650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.662 qpair failed and we were unable to recover it. 00:29:53.662 [2024-07-15 11:45:27.788907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.662 [2024-07-15 11:45:27.788938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.662 qpair failed and we were unable to recover it. 00:29:53.662 [2024-07-15 11:45:27.789062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.662 [2024-07-15 11:45:27.789093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.662 qpair failed and we were unable to recover it. 00:29:53.662 [2024-07-15 11:45:27.789223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.662 [2024-07-15 11:45:27.789253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.662 qpair failed and we were unable to recover it. 00:29:53.662 [2024-07-15 11:45:27.789394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.662 [2024-07-15 11:45:27.789425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.662 qpair failed and we were unable to recover it. 
00:29:53.662 [2024-07-15 11:45:27.789635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.662 [2024-07-15 11:45:27.789665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.662 qpair failed and we were unable to recover it. 00:29:53.662 [2024-07-15 11:45:27.789913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.662 [2024-07-15 11:45:27.789943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.662 qpair failed and we were unable to recover it. 00:29:53.662 [2024-07-15 11:45:27.790150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.662 [2024-07-15 11:45:27.790192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.662 qpair failed and we were unable to recover it. 00:29:53.662 [2024-07-15 11:45:27.790364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.662 [2024-07-15 11:45:27.790383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.662 qpair failed and we were unable to recover it. 00:29:53.662 [2024-07-15 11:45:27.790587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.790606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.790864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.790894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.791111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.791142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.791329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.791361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.791580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.791610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.791802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.791833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 
00:29:53.663 [2024-07-15 11:45:27.792042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.792073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.792191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.792221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.792429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.792461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.792605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.792636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.792831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.792862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.793057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.793087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.793292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.793312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.793430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.793448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.793648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.793667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.793871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.793889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 
00:29:53.663 [2024-07-15 11:45:27.794074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.794093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.794271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.794290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.794549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.794568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.794679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.794702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.794930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.794949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.795124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.795144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.795248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.795280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.795392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.795411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.795652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.795683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.795969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.795999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 
00:29:53.663 [2024-07-15 11:45:27.796200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.796219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.796387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.796407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.796597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.796627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.796842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.796873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.797130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.797162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.797303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.797336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.797588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.797607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.797732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.797763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.797916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.797947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.798143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.798174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 
00:29:53.663 [2024-07-15 11:45:27.798357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.798388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.798586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.798617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.798831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.798861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.799053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.799072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.663 qpair failed and we were unable to recover it. 00:29:53.663 [2024-07-15 11:45:27.799308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.663 [2024-07-15 11:45:27.799340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.799614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.799646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.799955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.799986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.800160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.800191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.800332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.800352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.800525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.800544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 
00:29:53.664 [2024-07-15 11:45:27.800658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.800689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.800915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.800946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.801119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.801150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.801360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.801393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.801679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.801709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.801921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.801952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.802157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.802188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.802326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.802346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.802615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.802649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.802790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.802822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 
00:29:53.664 [2024-07-15 11:45:27.802976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.803006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.803144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.803176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.803498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.803535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.803664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.803700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.803844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.803875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.804093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.804123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.804230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.804275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.804462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.804493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.804721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.804752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.804949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.804980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 
00:29:53.664 [2024-07-15 11:45:27.805188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.805207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.805326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.805346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.805434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.805452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.805610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.805629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.805792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.805811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.805986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.806007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.806099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.806117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.806352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.806372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.806654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.806673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.806939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.806982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 
00:29:53.664 [2024-07-15 11:45:27.807185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.807216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.807448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.807479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.807688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.807719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.808000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.808031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.808224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.664 [2024-07-15 11:45:27.808263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.664 qpair failed and we were unable to recover it. 00:29:53.664 [2024-07-15 11:45:27.808412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.808443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.808655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.808685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.808951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.808981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.809112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.809142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.809343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.809363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 
00:29:53.665 [2024-07-15 11:45:27.809546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.809566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.809740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.809771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.809973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.810003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.810141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.810172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.810483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.810503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.810604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.810623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.810868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.810887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.811116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.811135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.811339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.811359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.811597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.811616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 
00:29:53.665 [2024-07-15 11:45:27.811816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.811835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.812001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.812031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.812272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.812304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.812492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.812528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.812753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.812784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.813049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.813092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.813298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.813318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.813480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.813500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.813673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.813692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.813803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.813822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 
00:29:53.665 [2024-07-15 11:45:27.814014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.814045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.814240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.814279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.814402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.814432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.814637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.814668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.814855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.814887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.815029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.815061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.815317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.815349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.815614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.815645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.815850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.815880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.816082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.816113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 
00:29:53.665 [2024-07-15 11:45:27.816400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.816432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.816616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.816635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.816876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.816907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.817164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.817195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.665 [2024-07-15 11:45:27.817338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.665 [2024-07-15 11:45:27.817370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.665 qpair failed and we were unable to recover it. 00:29:53.666 [2024-07-15 11:45:27.817684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.666 [2024-07-15 11:45:27.817704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.666 qpair failed and we were unable to recover it. 00:29:53.666 [2024-07-15 11:45:27.817884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.666 [2024-07-15 11:45:27.817903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.666 qpair failed and we were unable to recover it. 00:29:53.666 [2024-07-15 11:45:27.818149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.666 [2024-07-15 11:45:27.818168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.666 qpair failed and we were unable to recover it. 00:29:53.666 [2024-07-15 11:45:27.818344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.666 [2024-07-15 11:45:27.818364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.666 qpair failed and we were unable to recover it. 00:29:53.666 [2024-07-15 11:45:27.818476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.666 [2024-07-15 11:45:27.818507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.666 qpair failed and we were unable to recover it. 
00:29:53.666 [2024-07-15 11:45:27.818755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.666 [2024-07-15 11:45:27.818825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.666 qpair failed and we were unable to recover it. 00:29:53.666 [2024-07-15 11:45:27.818984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.666 [2024-07-15 11:45:27.819019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.666 qpair failed and we were unable to recover it. 00:29:53.666 [2024-07-15 11:45:27.819215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.666 [2024-07-15 11:45:27.819248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.666 qpair failed and we were unable to recover it. 00:29:53.666 [2024-07-15 11:45:27.819393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.666 [2024-07-15 11:45:27.819425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.666 qpair failed and we were unable to recover it. 00:29:53.666 [2024-07-15 11:45:27.819628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.666 [2024-07-15 11:45:27.819659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.666 qpair failed and we were unable to recover it. 00:29:53.666 [2024-07-15 11:45:27.819924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.666 [2024-07-15 11:45:27.819955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.666 qpair failed and we were unable to recover it. 00:29:53.666 [2024-07-15 11:45:27.820149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.666 [2024-07-15 11:45:27.820180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.666 qpair failed and we were unable to recover it. 00:29:53.666 [2024-07-15 11:45:27.820327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.666 [2024-07-15 11:45:27.820361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.666 qpair failed and we were unable to recover it. 00:29:53.666 [2024-07-15 11:45:27.820576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.666 [2024-07-15 11:45:27.820607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.666 qpair failed and we were unable to recover it. 00:29:53.666 [2024-07-15 11:45:27.820805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.666 [2024-07-15 11:45:27.820836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.666 qpair failed and we were unable to recover it. 
00:29:53.666 [2024-07-15 11:45:27.821088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.666 [2024-07-15 11:45:27.821119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.666 qpair failed and we were unable to recover it. 00:29:53.666 [2024-07-15 11:45:27.821248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.666 [2024-07-15 11:45:27.821289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.666 qpair failed and we were unable to recover it. 00:29:53.666 [2024-07-15 11:45:27.821411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.666 [2024-07-15 11:45:27.821433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.666 qpair failed and we were unable to recover it. 00:29:53.666 [2024-07-15 11:45:27.821539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.666 [2024-07-15 11:45:27.821558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.666 qpair failed and we were unable to recover it. 00:29:53.666 [2024-07-15 11:45:27.821728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.666 [2024-07-15 11:45:27.821748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.666 qpair failed and we were unable to recover it. 00:29:53.666 [2024-07-15 11:45:27.821923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.666 [2024-07-15 11:45:27.821942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.666 qpair failed and we were unable to recover it. 00:29:53.666 [2024-07-15 11:45:27.822145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.666 [2024-07-15 11:45:27.822164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.666 qpair failed and we were unable to recover it. 00:29:53.666 [2024-07-15 11:45:27.822329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.667 [2024-07-15 11:45:27.822362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.667 qpair failed and we were unable to recover it. 00:29:53.667 [2024-07-15 11:45:27.822572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.667 [2024-07-15 11:45:27.822603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.667 qpair failed and we were unable to recover it. 00:29:53.667 [2024-07-15 11:45:27.822742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.667 [2024-07-15 11:45:27.822773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.667 qpair failed and we were unable to recover it. 
00:29:53.667 [2024-07-15 11:45:27.822932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.667 [2024-07-15 11:45:27.822963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.667 qpair failed and we were unable to recover it. 00:29:53.667 [2024-07-15 11:45:27.823229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.667 [2024-07-15 11:45:27.823268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.667 qpair failed and we were unable to recover it. 00:29:53.667 [2024-07-15 11:45:27.823463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.667 [2024-07-15 11:45:27.823494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.667 qpair failed and we were unable to recover it. 00:29:53.667 [2024-07-15 11:45:27.823678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.667 [2024-07-15 11:45:27.823708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.667 qpair failed and we were unable to recover it. 00:29:53.667 [2024-07-15 11:45:27.823929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.667 [2024-07-15 11:45:27.823960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.667 qpair failed and we were unable to recover it. 00:29:53.667 [2024-07-15 11:45:27.824240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.667 [2024-07-15 11:45:27.824278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.667 qpair failed and we were unable to recover it. 00:29:53.667 [2024-07-15 11:45:27.824482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.667 [2024-07-15 11:45:27.824502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.667 qpair failed and we were unable to recover it. 00:29:53.667 [2024-07-15 11:45:27.824671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.667 [2024-07-15 11:45:27.824715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.667 qpair failed and we were unable to recover it. 00:29:53.667 [2024-07-15 11:45:27.824909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.667 [2024-07-15 11:45:27.824940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.667 qpair failed and we were unable to recover it. 00:29:53.667 [2024-07-15 11:45:27.825267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.667 [2024-07-15 11:45:27.825299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.667 qpair failed and we were unable to recover it. 
00:29:53.667 [2024-07-15 11:45:27.825498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.667 [2024-07-15 11:45:27.825529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.667 qpair failed and we were unable to recover it. 00:29:53.667 [2024-07-15 11:45:27.825808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.667 [2024-07-15 11:45:27.825839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.667 qpair failed and we were unable to recover it. 00:29:53.667 [2024-07-15 11:45:27.826056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.667 [2024-07-15 11:45:27.826087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.667 qpair failed and we were unable to recover it. 00:29:53.667 [2024-07-15 11:45:27.826285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.667 [2024-07-15 11:45:27.826317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.667 qpair failed and we were unable to recover it. 00:29:53.667 [2024-07-15 11:45:27.826574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.667 [2024-07-15 11:45:27.826593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.667 qpair failed and we were unable to recover it. 00:29:53.667 [2024-07-15 11:45:27.826715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.667 [2024-07-15 11:45:27.826743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.667 qpair failed and we were unable to recover it. 00:29:53.667 [2024-07-15 11:45:27.826866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.667 [2024-07-15 11:45:27.826898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.667 qpair failed and we were unable to recover it. 00:29:53.667 [2024-07-15 11:45:27.827082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.667 [2024-07-15 11:45:27.827113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.667 qpair failed and we were unable to recover it. 00:29:53.667 [2024-07-15 11:45:27.827317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.667 [2024-07-15 11:45:27.827349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.667 qpair failed and we were unable to recover it. 00:29:53.667 [2024-07-15 11:45:27.827536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.667 [2024-07-15 11:45:27.827555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.667 qpair failed and we were unable to recover it. 
00:29:53.667 [2024-07-15 11:45:27.827645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.667 [2024-07-15 11:45:27.827668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.667 qpair failed and we were unable to recover it. 00:29:53.667 [2024-07-15 11:45:27.827766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.667 [2024-07-15 11:45:27.827786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.667 qpair failed and we were unable to recover it. 00:29:53.667 [2024-07-15 11:45:27.827870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.667 [2024-07-15 11:45:27.827888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.667 qpair failed and we were unable to recover it. 00:29:53.667 [2024-07-15 11:45:27.828003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.828022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 00:29:53.668 [2024-07-15 11:45:27.828198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.828229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 00:29:53.668 [2024-07-15 11:45:27.828525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.828557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 00:29:53.668 [2024-07-15 11:45:27.828687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.828717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 00:29:53.668 [2024-07-15 11:45:27.828840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.828871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 00:29:53.668 [2024-07-15 11:45:27.829071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.829090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 00:29:53.668 [2024-07-15 11:45:27.829267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.829299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 
00:29:53.668 [2024-07-15 11:45:27.829608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.829639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 00:29:53.668 [2024-07-15 11:45:27.829771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.829802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 00:29:53.668 [2024-07-15 11:45:27.830078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.830098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 00:29:53.668 [2024-07-15 11:45:27.830207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.830226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 00:29:53.668 [2024-07-15 11:45:27.830423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.830443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 00:29:53.668 [2024-07-15 11:45:27.830608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.830628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 00:29:53.668 [2024-07-15 11:45:27.830813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.830832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 00:29:53.668 [2024-07-15 11:45:27.831045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.831075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 00:29:53.668 [2024-07-15 11:45:27.831359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.831392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 00:29:53.668 [2024-07-15 11:45:27.831583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.831614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 
00:29:53.668 [2024-07-15 11:45:27.831874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.831904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 00:29:53.668 [2024-07-15 11:45:27.832137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.832167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 00:29:53.668 [2024-07-15 11:45:27.832361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.832393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 00:29:53.668 [2024-07-15 11:45:27.832524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.832542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 00:29:53.668 [2024-07-15 11:45:27.832722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.832753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 00:29:53.668 [2024-07-15 11:45:27.832876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.832906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 00:29:53.668 [2024-07-15 11:45:27.833038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.833069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 00:29:53.668 [2024-07-15 11:45:27.833277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.833309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 00:29:53.668 [2024-07-15 11:45:27.833426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.833456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 00:29:53.668 [2024-07-15 11:45:27.833741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.668 [2024-07-15 11:45:27.833771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.668 qpair failed and we were unable to recover it. 
00:29:53.668 [2024-07-15 11:45:27.834053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.834084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 00:29:53.669 [2024-07-15 11:45:27.834291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.834324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 00:29:53.669 [2024-07-15 11:45:27.834523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.834554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 00:29:53.669 [2024-07-15 11:45:27.834741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.834760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 00:29:53.669 [2024-07-15 11:45:27.835016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.835035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 00:29:53.669 [2024-07-15 11:45:27.835242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.835296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 00:29:53.669 [2024-07-15 11:45:27.835488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.835519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 00:29:53.669 [2024-07-15 11:45:27.835761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.835791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 00:29:53.669 [2024-07-15 11:45:27.835929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.835960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 00:29:53.669 [2024-07-15 11:45:27.836137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.836168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 
00:29:53.669 [2024-07-15 11:45:27.836374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.836411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 00:29:53.669 [2024-07-15 11:45:27.836599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.836630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 00:29:53.669 [2024-07-15 11:45:27.836828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.836858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 00:29:53.669 [2024-07-15 11:45:27.837063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.837093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 00:29:53.669 [2024-07-15 11:45:27.837377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.837409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 00:29:53.669 [2024-07-15 11:45:27.837596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.837626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 00:29:53.669 [2024-07-15 11:45:27.837766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.837786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 00:29:53.669 [2024-07-15 11:45:27.837907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.837927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 00:29:53.669 [2024-07-15 11:45:27.838103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.838122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 00:29:53.669 [2024-07-15 11:45:27.838296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.838316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 
00:29:53.669 [2024-07-15 11:45:27.838585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.838616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 00:29:53.669 [2024-07-15 11:45:27.838732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.838763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 00:29:53.669 [2024-07-15 11:45:27.838995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.839026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 00:29:53.669 [2024-07-15 11:45:27.839225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.839262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 00:29:53.669 [2024-07-15 11:45:27.839417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.839438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 00:29:53.669 [2024-07-15 11:45:27.839611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.839630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 00:29:53.669 [2024-07-15 11:45:27.839797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-15 11:45:27.839816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.669 qpair failed and we were unable to recover it. 00:29:53.669 [2024-07-15 11:45:27.839914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.839932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 00:29:53.670 [2024-07-15 11:45:27.840096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.840116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 00:29:53.670 [2024-07-15 11:45:27.840282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.840302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 
00:29:53.670 [2024-07-15 11:45:27.840465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.840496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 00:29:53.670 [2024-07-15 11:45:27.840650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.840681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 00:29:53.670 [2024-07-15 11:45:27.840823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.840854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 00:29:53.670 [2024-07-15 11:45:27.841044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.841074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 00:29:53.670 [2024-07-15 11:45:27.841274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.841306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 00:29:53.670 [2024-07-15 11:45:27.841499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.841517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 00:29:53.670 [2024-07-15 11:45:27.841700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.841730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 00:29:53.670 [2024-07-15 11:45:27.841857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.841888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 00:29:53.670 [2024-07-15 11:45:27.842087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.842118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 00:29:53.670 [2024-07-15 11:45:27.842339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.842359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 
00:29:53.670 [2024-07-15 11:45:27.842549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.842580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 00:29:53.670 [2024-07-15 11:45:27.842839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.842870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 00:29:53.670 [2024-07-15 11:45:27.843015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.843046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 00:29:53.670 [2024-07-15 11:45:27.843184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.843215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 00:29:53.670 [2024-07-15 11:45:27.843418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.843438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 00:29:53.670 [2024-07-15 11:45:27.843653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.843684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 00:29:53.670 [2024-07-15 11:45:27.843941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.843971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 00:29:53.670 [2024-07-15 11:45:27.844311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.844330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 00:29:53.670 [2024-07-15 11:45:27.844435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.844454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 00:29:53.670 [2024-07-15 11:45:27.844686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.844706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 
00:29:53.670 [2024-07-15 11:45:27.844897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.844920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 00:29:53.670 [2024-07-15 11:45:27.845101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.845121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 00:29:53.670 [2024-07-15 11:45:27.845361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.845380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 00:29:53.670 [2024-07-15 11:45:27.845550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.845570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.670 qpair failed and we were unable to recover it. 00:29:53.670 [2024-07-15 11:45:27.845763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-15 11:45:27.845781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.671 qpair failed and we were unable to recover it. 00:29:53.671 [2024-07-15 11:45:27.845875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.671 [2024-07-15 11:45:27.845893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.671 qpair failed and we were unable to recover it. 00:29:53.671 [2024-07-15 11:45:27.846003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.671 [2024-07-15 11:45:27.846021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.671 qpair failed and we were unable to recover it. 00:29:53.671 [2024-07-15 11:45:27.846274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.671 [2024-07-15 11:45:27.846306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.671 qpair failed and we were unable to recover it. 00:29:53.671 [2024-07-15 11:45:27.846445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.671 [2024-07-15 11:45:27.846476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.671 qpair failed and we were unable to recover it. 00:29:53.671 [2024-07-15 11:45:27.846622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.671 [2024-07-15 11:45:27.846652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.671 qpair failed and we were unable to recover it. 
00:29:53.671-00:29:53.676 (the preceding pair of messages, posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420, repeats for each successive connect retry with timestamps 2024-07-15 11:45:27.846843 through 11:45:27.879833; every attempt ends with: qpair failed and we were unable to recover it.)
00:29:53.676 [2024-07-15 11:45:27.879987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.676 [2024-07-15 11:45:27.880023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.676 qpair failed and we were unable to recover it. 00:29:53.676 [2024-07-15 11:45:27.880337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.676 [2024-07-15 11:45:27.880369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.676 qpair failed and we were unable to recover it. 00:29:53.676 [2024-07-15 11:45:27.880475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.676 [2024-07-15 11:45:27.880506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.676 qpair failed and we were unable to recover it. 00:29:53.676 [2024-07-15 11:45:27.880693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.676 [2024-07-15 11:45:27.880712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.677 qpair failed and we were unable to recover it. 00:29:53.677 [2024-07-15 11:45:27.880910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.677 [2024-07-15 11:45:27.880929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.677 qpair failed and we were unable to recover it. 00:29:53.677 [2024-07-15 11:45:27.881025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.677 [2024-07-15 11:45:27.881043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.677 qpair failed and we were unable to recover it. 00:29:53.677 [2024-07-15 11:45:27.881284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.677 [2024-07-15 11:45:27.881357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.677 qpair failed and we were unable to recover it. 00:29:53.677 [2024-07-15 11:45:27.881568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.677 [2024-07-15 11:45:27.881603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.677 qpair failed and we were unable to recover it. 00:29:53.677 [2024-07-15 11:45:27.881865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.677 [2024-07-15 11:45:27.881897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.677 qpair failed and we were unable to recover it. 00:29:53.677 [2024-07-15 11:45:27.882118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.677 [2024-07-15 11:45:27.882149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.677 qpair failed and we were unable to recover it. 
00:29:53.677 [2024-07-15 11:45:27.882387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.677 [2024-07-15 11:45:27.882421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.677 qpair failed and we were unable to recover it. 00:29:53.677 [2024-07-15 11:45:27.882615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.677 [2024-07-15 11:45:27.882646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.677 qpair failed and we were unable to recover it. 00:29:53.677 [2024-07-15 11:45:27.882774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.677 [2024-07-15 11:45:27.882794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.677 qpair failed and we were unable to recover it. 00:29:53.677 [2024-07-15 11:45:27.882978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.677 [2024-07-15 11:45:27.883009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.677 qpair failed and we were unable to recover it. 00:29:53.677 [2024-07-15 11:45:27.883147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.677 [2024-07-15 11:45:27.883178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.677 qpair failed and we were unable to recover it. 00:29:53.677 [2024-07-15 11:45:27.883402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.677 [2024-07-15 11:45:27.883434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.677 qpair failed and we were unable to recover it. 00:29:53.677 [2024-07-15 11:45:27.883641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.677 [2024-07-15 11:45:27.883672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.677 qpair failed and we were unable to recover it. 00:29:53.677 [2024-07-15 11:45:27.883799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.677 [2024-07-15 11:45:27.883829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.677 qpair failed and we were unable to recover it. 00:29:53.677 [2024-07-15 11:45:27.883942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.677 [2024-07-15 11:45:27.883972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.677 qpair failed and we were unable to recover it. 00:29:53.677 [2024-07-15 11:45:27.884166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.677 [2024-07-15 11:45:27.884196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.677 qpair failed and we were unable to recover it. 
00:29:53.677-00:29:53.678 (the same connect() failed, errno = 111 / sock connection error pair for tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 continues to repeat with timestamps 2024-07-15 11:45:27.884491 through 11:45:27.891004; every attempt ends with: qpair failed and we were unable to recover it.)
00:29:53.678 [2024-07-15 11:45:27.891198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.678 [2024-07-15 11:45:27.891229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.678 qpair failed and we were unable to recover it. 00:29:53.678 [2024-07-15 11:45:27.891435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.678 [2024-07-15 11:45:27.891456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.678 qpair failed and we were unable to recover it. 00:29:53.678 [2024-07-15 11:45:27.891554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.678 [2024-07-15 11:45:27.891574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.678 qpair failed and we were unable to recover it. 00:29:53.678 [2024-07-15 11:45:27.891671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.678 [2024-07-15 11:45:27.891691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.678 qpair failed and we were unable to recover it. 00:29:53.678 [2024-07-15 11:45:27.891782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.678 [2024-07-15 11:45:27.891802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.678 qpair failed and we were unable to recover it. 00:29:53.678 [2024-07-15 11:45:27.891906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.678 [2024-07-15 11:45:27.891925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.678 qpair failed and we were unable to recover it. 00:29:53.678 [2024-07-15 11:45:27.892094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.678 [2024-07-15 11:45:27.892127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.678 qpair failed and we were unable to recover it. 00:29:53.678 [2024-07-15 11:45:27.892267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.678 [2024-07-15 11:45:27.892299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.678 qpair failed and we were unable to recover it. 00:29:53.678 [2024-07-15 11:45:27.892480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.679 [2024-07-15 11:45:27.892510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.679 qpair failed and we were unable to recover it. 00:29:53.679 [2024-07-15 11:45:27.892819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.679 [2024-07-15 11:45:27.892849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.679 qpair failed and we were unable to recover it. 
00:29:53.679 [2024-07-15 11:45:27.892980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.679 [2024-07-15 11:45:27.893010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.679 qpair failed and we were unable to recover it. 00:29:53.679 [2024-07-15 11:45:27.893275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.679 [2024-07-15 11:45:27.893306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.679 qpair failed and we were unable to recover it. 00:29:53.679 [2024-07-15 11:45:27.893421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.679 [2024-07-15 11:45:27.893440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.679 qpair failed and we were unable to recover it. 00:29:53.679 [2024-07-15 11:45:27.893642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.679 [2024-07-15 11:45:27.893673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.679 qpair failed and we were unable to recover it. 00:29:53.679 [2024-07-15 11:45:27.893876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.679 [2024-07-15 11:45:27.893906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.679 qpair failed and we were unable to recover it. 00:29:53.679 [2024-07-15 11:45:27.894104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.679 [2024-07-15 11:45:27.894135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.679 qpair failed and we were unable to recover it. 00:29:53.679 [2024-07-15 11:45:27.894473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.679 [2024-07-15 11:45:27.894504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.679 qpair failed and we were unable to recover it. 00:29:53.679 [2024-07-15 11:45:27.894621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.679 [2024-07-15 11:45:27.894659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.679 qpair failed and we were unable to recover it. 00:29:53.679 [2024-07-15 11:45:27.894866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.679 [2024-07-15 11:45:27.894897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.679 qpair failed and we were unable to recover it. 00:29:53.679 [2024-07-15 11:45:27.895087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.679 [2024-07-15 11:45:27.895118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.679 qpair failed and we were unable to recover it. 
00:29:53.679 [2024-07-15 11:45:27.895330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.679 [2024-07-15 11:45:27.895362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.679 qpair failed and we were unable to recover it. 00:29:53.679 [2024-07-15 11:45:27.895495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.679 [2024-07-15 11:45:27.895526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.679 qpair failed and we were unable to recover it. 00:29:53.679 [2024-07-15 11:45:27.895654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.679 [2024-07-15 11:45:27.895684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.679 qpair failed and we were unable to recover it. 00:29:53.679 [2024-07-15 11:45:27.895908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.679 [2024-07-15 11:45:27.895940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.679 qpair failed and we were unable to recover it. 00:29:53.679 [2024-07-15 11:45:27.896114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.679 [2024-07-15 11:45:27.896133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.679 qpair failed and we were unable to recover it. 00:29:53.679 [2024-07-15 11:45:27.896390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.679 [2024-07-15 11:45:27.896410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.679 qpair failed and we were unable to recover it. 00:29:53.679 [2024-07-15 11:45:27.896548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.679 [2024-07-15 11:45:27.896567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.679 qpair failed and we were unable to recover it. 00:29:53.679 [2024-07-15 11:45:27.896770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.679 [2024-07-15 11:45:27.896789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.679 qpair failed and we were unable to recover it. 00:29:53.679 [2024-07-15 11:45:27.896983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.679 [2024-07-15 11:45:27.897002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.679 qpair failed and we were unable to recover it. 00:29:53.679 [2024-07-15 11:45:27.897186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.679 [2024-07-15 11:45:27.897205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.679 qpair failed and we were unable to recover it. 
00:29:53.679 [2024-07-15 11:45:27.897398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.897419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 00:29:53.680 [2024-07-15 11:45:27.897713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.897743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 00:29:53.680 [2024-07-15 11:45:27.897931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.897962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 00:29:53.680 [2024-07-15 11:45:27.898176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.898208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 00:29:53.680 [2024-07-15 11:45:27.898354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.898374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 00:29:53.680 [2024-07-15 11:45:27.898562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.898581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 00:29:53.680 [2024-07-15 11:45:27.898658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.898676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 00:29:53.680 [2024-07-15 11:45:27.898754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.898781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 00:29:53.680 [2024-07-15 11:45:27.898889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.898907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 00:29:53.680 [2024-07-15 11:45:27.899029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.899048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 
00:29:53.680 [2024-07-15 11:45:27.899297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.899317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 00:29:53.680 [2024-07-15 11:45:27.899482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.899501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 00:29:53.680 [2024-07-15 11:45:27.899733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.899752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 00:29:53.680 [2024-07-15 11:45:27.899991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.900022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 00:29:53.680 [2024-07-15 11:45:27.900311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.900332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 00:29:53.680 [2024-07-15 11:45:27.900446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.900465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 00:29:53.680 [2024-07-15 11:45:27.900700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.900731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 00:29:53.680 [2024-07-15 11:45:27.900857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.900888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 00:29:53.680 [2024-07-15 11:45:27.901150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.901180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 00:29:53.680 [2024-07-15 11:45:27.901443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.901475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 
00:29:53.680 [2024-07-15 11:45:27.901633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.901664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 00:29:53.680 [2024-07-15 11:45:27.901789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.901819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 00:29:53.680 [2024-07-15 11:45:27.902068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.902087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 00:29:53.680 [2024-07-15 11:45:27.902209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.902231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 00:29:53.680 [2024-07-15 11:45:27.902392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.902436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 00:29:53.680 [2024-07-15 11:45:27.902689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.902721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 00:29:53.680 [2024-07-15 11:45:27.902857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.902887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.680 qpair failed and we were unable to recover it. 00:29:53.680 [2024-07-15 11:45:27.903047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.680 [2024-07-15 11:45:27.903084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 00:29:53.681 [2024-07-15 11:45:27.903277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.681 [2024-07-15 11:45:27.903310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 00:29:53.681 [2024-07-15 11:45:27.903529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.681 [2024-07-15 11:45:27.903560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 
00:29:53.681 [2024-07-15 11:45:27.903751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.681 [2024-07-15 11:45:27.903782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 00:29:53.681 [2024-07-15 11:45:27.903967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.681 [2024-07-15 11:45:27.903998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 00:29:53.681 [2024-07-15 11:45:27.904188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.681 [2024-07-15 11:45:27.904218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 00:29:53.681 [2024-07-15 11:45:27.904489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.681 [2024-07-15 11:45:27.904521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 00:29:53.681 [2024-07-15 11:45:27.904657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.681 [2024-07-15 11:45:27.904676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 00:29:53.681 [2024-07-15 11:45:27.904958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.681 [2024-07-15 11:45:27.904977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 00:29:53.681 [2024-07-15 11:45:27.905066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.681 [2024-07-15 11:45:27.905085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 00:29:53.681 [2024-07-15 11:45:27.905348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.681 [2024-07-15 11:45:27.905380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 00:29:53.681 [2024-07-15 11:45:27.905640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.681 [2024-07-15 11:45:27.905671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 00:29:53.681 [2024-07-15 11:45:27.905869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.681 [2024-07-15 11:45:27.905900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 
00:29:53.681 [2024-07-15 11:45:27.906054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.681 [2024-07-15 11:45:27.906084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 00:29:53.681 [2024-07-15 11:45:27.906384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.681 [2024-07-15 11:45:27.906416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 00:29:53.681 [2024-07-15 11:45:27.906562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.681 [2024-07-15 11:45:27.906581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 00:29:53.681 [2024-07-15 11:45:27.906796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.681 [2024-07-15 11:45:27.906827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 00:29:53.681 [2024-07-15 11:45:27.907031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.681 [2024-07-15 11:45:27.907061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 00:29:53.681 [2024-07-15 11:45:27.907241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.681 [2024-07-15 11:45:27.907282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 00:29:53.681 [2024-07-15 11:45:27.907477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.681 [2024-07-15 11:45:27.907496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 00:29:53.681 [2024-07-15 11:45:27.907767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.681 [2024-07-15 11:45:27.907798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 00:29:53.681 [2024-07-15 11:45:27.908026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.681 [2024-07-15 11:45:27.908057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 00:29:53.681 [2024-07-15 11:45:27.908249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.681 [2024-07-15 11:45:27.908289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 
00:29:53.681 [2024-07-15 11:45:27.908433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.681 [2024-07-15 11:45:27.908473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 00:29:53.681 [2024-07-15 11:45:27.908734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.681 [2024-07-15 11:45:27.908753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 00:29:53.681 [2024-07-15 11:45:27.908984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.681 [2024-07-15 11:45:27.909004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 00:29:53.681 [2024-07-15 11:45:27.909166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.681 [2024-07-15 11:45:27.909186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.681 qpair failed and we were unable to recover it. 00:29:53.681 [2024-07-15 11:45:27.909369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.909390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 00:29:53.682 [2024-07-15 11:45:27.909569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.909588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 00:29:53.682 [2024-07-15 11:45:27.909703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.909722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 00:29:53.682 [2024-07-15 11:45:27.909904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.909935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 00:29:53.682 [2024-07-15 11:45:27.910052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.910083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 00:29:53.682 [2024-07-15 11:45:27.910288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.910320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 
00:29:53.682 [2024-07-15 11:45:27.910505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.910546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 00:29:53.682 [2024-07-15 11:45:27.910660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.910679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 00:29:53.682 [2024-07-15 11:45:27.910927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.910958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 00:29:53.682 [2024-07-15 11:45:27.911214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.911245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 00:29:53.682 [2024-07-15 11:45:27.911392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.911424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 00:29:53.682 [2024-07-15 11:45:27.911612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.911642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 00:29:53.682 [2024-07-15 11:45:27.911767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.911798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 00:29:53.682 [2024-07-15 11:45:27.911940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.911976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 00:29:53.682 [2024-07-15 11:45:27.912180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.912210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 00:29:53.682 [2024-07-15 11:45:27.912416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.912449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 
00:29:53.682 [2024-07-15 11:45:27.912640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.912671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 00:29:53.682 [2024-07-15 11:45:27.912791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.912822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 00:29:53.682 [2024-07-15 11:45:27.913028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.913048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 00:29:53.682 [2024-07-15 11:45:27.913207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.913226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 00:29:53.682 [2024-07-15 11:45:27.913400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.913421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 00:29:53.682 [2024-07-15 11:45:27.913580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.913599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 00:29:53.682 [2024-07-15 11:45:27.913760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.913779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 00:29:53.682 [2024-07-15 11:45:27.913874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.913893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 00:29:53.682 [2024-07-15 11:45:27.914061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.914081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 00:29:53.682 [2024-07-15 11:45:27.914187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.914206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 
00:29:53.682 [2024-07-15 11:45:27.914378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.914398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 00:29:53.682 [2024-07-15 11:45:27.914591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.682 [2024-07-15 11:45:27.914610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.682 qpair failed and we were unable to recover it. 00:29:53.683 [2024-07-15 11:45:27.914870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.683 [2024-07-15 11:45:27.914889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.683 qpair failed and we were unable to recover it. 00:29:53.683 [2024-07-15 11:45:27.915081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.683 [2024-07-15 11:45:27.915100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.683 qpair failed and we were unable to recover it. 00:29:53.683 [2024-07-15 11:45:27.915277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.683 [2024-07-15 11:45:27.915310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.683 qpair failed and we were unable to recover it. 00:29:53.683 [2024-07-15 11:45:27.915435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.683 [2024-07-15 11:45:27.915466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.683 qpair failed and we were unable to recover it. 00:29:53.683 [2024-07-15 11:45:27.915730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.683 [2024-07-15 11:45:27.915761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.683 qpair failed and we were unable to recover it. 00:29:53.683 [2024-07-15 11:45:27.915993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.683 [2024-07-15 11:45:27.916012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.683 qpair failed and we were unable to recover it. 00:29:53.683 [2024-07-15 11:45:27.916107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.683 [2024-07-15 11:45:27.916126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.683 qpair failed and we were unable to recover it. 00:29:53.683 [2024-07-15 11:45:27.916391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.683 [2024-07-15 11:45:27.916411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.683 qpair failed and we were unable to recover it. 
00:29:53.683 [2024-07-15 11:45:27.916524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.683 [2024-07-15 11:45:27.916543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.683 qpair failed and we were unable to recover it. 00:29:53.683 [2024-07-15 11:45:27.916776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.683 [2024-07-15 11:45:27.916808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.683 qpair failed and we were unable to recover it. 00:29:53.683 [2024-07-15 11:45:27.916955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.683 [2024-07-15 11:45:27.916986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.683 qpair failed and we were unable to recover it. 00:29:53.683 [2024-07-15 11:45:27.917123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.683 [2024-07-15 11:45:27.917154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.683 qpair failed and we were unable to recover it. 00:29:53.683 [2024-07-15 11:45:27.917372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.683 [2024-07-15 11:45:27.917405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.683 qpair failed and we were unable to recover it. 00:29:53.683 [2024-07-15 11:45:27.917540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.683 [2024-07-15 11:45:27.917570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.683 qpair failed and we were unable to recover it. 00:29:53.683 [2024-07-15 11:45:27.917773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.683 [2024-07-15 11:45:27.917804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.683 qpair failed and we were unable to recover it. 00:29:53.683 [2024-07-15 11:45:27.917948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.683 [2024-07-15 11:45:27.917978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.683 qpair failed and we were unable to recover it. 00:29:53.683 [2024-07-15 11:45:27.918163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.683 [2024-07-15 11:45:27.918194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.683 qpair failed and we were unable to recover it. 00:29:53.683 [2024-07-15 11:45:27.918447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.683 [2024-07-15 11:45:27.918480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.683 qpair failed and we were unable to recover it. 
00:29:53.683 [2024-07-15 11:45:27.918760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.683 [2024-07-15 11:45:27.918779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.683 qpair failed and we were unable to recover it. 
00:29:53.683 [repeated output collapsed: from 11:45:27.918760 through 11:45:27.965977 the same pair of errors recurs continuously - posix.c:1038:posix_sock_create reporting connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reporting a sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 - and every attempt ends with "qpair failed and we were unable to recover it."] 
00:29:53.691 [2024-07-15 11:45:27.966097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.691 [2024-07-15 11:45:27.966127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.691 qpair failed and we were unable to recover it. 00:29:53.691 [2024-07-15 11:45:27.966382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.691 [2024-07-15 11:45:27.966414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.691 qpair failed and we were unable to recover it. 00:29:53.691 [2024-07-15 11:45:27.966610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.691 [2024-07-15 11:45:27.966629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.691 qpair failed and we were unable to recover it. 00:29:53.691 [2024-07-15 11:45:27.966863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.691 [2024-07-15 11:45:27.966894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.691 qpair failed and we were unable to recover it. 00:29:53.691 [2024-07-15 11:45:27.967108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.691 [2024-07-15 11:45:27.967138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.691 qpair failed and we were unable to recover it. 00:29:53.691 [2024-07-15 11:45:27.967426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.691 [2024-07-15 11:45:27.967457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.691 qpair failed and we were unable to recover it. 00:29:53.691 [2024-07-15 11:45:27.967674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.691 [2024-07-15 11:45:27.967693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.691 qpair failed and we were unable to recover it. 00:29:53.691 [2024-07-15 11:45:27.967867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.691 [2024-07-15 11:45:27.967886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.691 qpair failed and we were unable to recover it. 00:29:53.691 [2024-07-15 11:45:27.968051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.691 [2024-07-15 11:45:27.968081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.691 qpair failed and we were unable to recover it. 00:29:53.691 [2024-07-15 11:45:27.968334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.691 [2024-07-15 11:45:27.968366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.691 qpair failed and we were unable to recover it. 
00:29:53.691 [2024-07-15 11:45:27.968515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.691 [2024-07-15 11:45:27.968546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.691 qpair failed and we were unable to recover it. 00:29:53.691 [2024-07-15 11:45:27.968690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.691 [2024-07-15 11:45:27.968721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.691 qpair failed and we were unable to recover it. 00:29:53.691 [2024-07-15 11:45:27.968909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.692 [2024-07-15 11:45:27.968940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.692 qpair failed and we were unable to recover it. 00:29:53.692 [2024-07-15 11:45:27.969198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.692 [2024-07-15 11:45:27.969229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.692 qpair failed and we were unable to recover it. 00:29:53.692 [2024-07-15 11:45:27.969487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.692 [2024-07-15 11:45:27.969519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.692 qpair failed and we were unable to recover it. 00:29:53.692 [2024-07-15 11:45:27.969802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.692 [2024-07-15 11:45:27.969833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.692 qpair failed and we were unable to recover it. 00:29:53.692 [2024-07-15 11:45:27.970088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.692 [2024-07-15 11:45:27.970119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.692 qpair failed and we were unable to recover it. 00:29:53.692 [2024-07-15 11:45:27.970318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.692 [2024-07-15 11:45:27.970359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.692 qpair failed and we were unable to recover it. 00:29:53.692 [2024-07-15 11:45:27.970565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.692 [2024-07-15 11:45:27.970585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.692 qpair failed and we were unable to recover it. 00:29:53.692 [2024-07-15 11:45:27.970761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.692 [2024-07-15 11:45:27.970780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.692 qpair failed and we were unable to recover it. 
00:29:53.692 [2024-07-15 11:45:27.970956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.692 [2024-07-15 11:45:27.970987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.692 qpair failed and we were unable to recover it. 00:29:53.692 [2024-07-15 11:45:27.971215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.692 [2024-07-15 11:45:27.971246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.692 qpair failed and we were unable to recover it. 00:29:53.692 [2024-07-15 11:45:27.971437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.692 [2024-07-15 11:45:27.971468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.692 qpair failed and we were unable to recover it. 00:29:53.692 [2024-07-15 11:45:27.971653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.692 [2024-07-15 11:45:27.971684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.692 qpair failed and we were unable to recover it. 00:29:53.692 [2024-07-15 11:45:27.971930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.692 [2024-07-15 11:45:27.972001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.692 qpair failed and we were unable to recover it. 00:29:53.692 [2024-07-15 11:45:27.972220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.692 [2024-07-15 11:45:27.972271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.692 qpair failed and we were unable to recover it. 00:29:53.692 [2024-07-15 11:45:27.972484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.692 [2024-07-15 11:45:27.972515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.692 qpair failed and we were unable to recover it. 00:29:53.692 [2024-07-15 11:45:27.972806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.692 [2024-07-15 11:45:27.972837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.692 qpair failed and we were unable to recover it. 00:29:53.692 [2024-07-15 11:45:27.973128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.692 [2024-07-15 11:45:27.973160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.692 qpair failed and we were unable to recover it. 00:29:53.692 [2024-07-15 11:45:27.973362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.692 [2024-07-15 11:45:27.973395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.692 qpair failed and we were unable to recover it. 
00:29:53.692 [2024-07-15 11:45:27.973590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.692 [2024-07-15 11:45:27.973621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.692 qpair failed and we were unable to recover it. 00:29:53.692 [2024-07-15 11:45:27.973810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.692 [2024-07-15 11:45:27.973841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.692 qpair failed and we were unable to recover it. 00:29:53.692 [2024-07-15 11:45:27.974040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.692 [2024-07-15 11:45:27.974071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.692 qpair failed and we were unable to recover it. 00:29:53.692 [2024-07-15 11:45:27.974267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.692 [2024-07-15 11:45:27.974299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.692 qpair failed and we were unable to recover it. 00:29:53.693 [2024-07-15 11:45:27.974540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.974571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 00:29:53.693 [2024-07-15 11:45:27.974694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.974724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 00:29:53.693 [2024-07-15 11:45:27.974856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.974886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 00:29:53.693 [2024-07-15 11:45:27.975087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.975118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 00:29:53.693 [2024-07-15 11:45:27.975426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.975459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 00:29:53.693 [2024-07-15 11:45:27.975743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.975773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 
00:29:53.693 [2024-07-15 11:45:27.975911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.975942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 00:29:53.693 [2024-07-15 11:45:27.976208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.976238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 00:29:53.693 [2024-07-15 11:45:27.976491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.976513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 00:29:53.693 [2024-07-15 11:45:27.976805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.976836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 00:29:53.693 [2024-07-15 11:45:27.977091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.977121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 00:29:53.693 [2024-07-15 11:45:27.977316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.977348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 00:29:53.693 [2024-07-15 11:45:27.977605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.977624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 00:29:53.693 [2024-07-15 11:45:27.977788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.977820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 00:29:53.693 [2024-07-15 11:45:27.978099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.978130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 00:29:53.693 [2024-07-15 11:45:27.978331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.978364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 
00:29:53.693 [2024-07-15 11:45:27.978566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.978585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 00:29:53.693 [2024-07-15 11:45:27.978858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.978893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 00:29:53.693 [2024-07-15 11:45:27.979081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.979112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 00:29:53.693 [2024-07-15 11:45:27.979380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.979412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 00:29:53.693 [2024-07-15 11:45:27.979619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.979641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 00:29:53.693 [2024-07-15 11:45:27.979823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.979843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 00:29:53.693 [2024-07-15 11:45:27.980072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.980091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 00:29:53.693 [2024-07-15 11:45:27.980289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.980309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 00:29:53.693 [2024-07-15 11:45:27.980558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.980593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 00:29:53.693 [2024-07-15 11:45:27.980782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.980812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 
00:29:53.693 [2024-07-15 11:45:27.981008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.693 [2024-07-15 11:45:27.981039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.693 qpair failed and we were unable to recover it. 00:29:53.693 [2024-07-15 11:45:27.981178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.981208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 00:29:53.694 [2024-07-15 11:45:27.981357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.981377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 00:29:53.694 [2024-07-15 11:45:27.981566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.981585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 00:29:53.694 [2024-07-15 11:45:27.981745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.981767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 00:29:53.694 [2024-07-15 11:45:27.981951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.981982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 00:29:53.694 [2024-07-15 11:45:27.982214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.982245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 00:29:53.694 [2024-07-15 11:45:27.982390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.982421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 00:29:53.694 [2024-07-15 11:45:27.982595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.982626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 00:29:53.694 [2024-07-15 11:45:27.982860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.982879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 
00:29:53.694 [2024-07-15 11:45:27.983139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.983159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 00:29:53.694 [2024-07-15 11:45:27.983268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.983288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 00:29:53.694 [2024-07-15 11:45:27.983394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.983413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 00:29:53.694 [2024-07-15 11:45:27.983513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.983532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 00:29:53.694 [2024-07-15 11:45:27.983719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.983738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 00:29:53.694 [2024-07-15 11:45:27.983934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.983953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 00:29:53.694 [2024-07-15 11:45:27.984201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.984220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 00:29:53.694 [2024-07-15 11:45:27.984411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.984439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 00:29:53.694 [2024-07-15 11:45:27.984640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.984660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 00:29:53.694 [2024-07-15 11:45:27.984845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.984875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 
00:29:53.694 [2024-07-15 11:45:27.985136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.985167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 00:29:53.694 [2024-07-15 11:45:27.985458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.985490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 00:29:53.694 [2024-07-15 11:45:27.985762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.985793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 00:29:53.694 [2024-07-15 11:45:27.986063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.986094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 00:29:53.694 [2024-07-15 11:45:27.986396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.986428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 00:29:53.694 [2024-07-15 11:45:27.986620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.986651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 00:29:53.694 [2024-07-15 11:45:27.986918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.986949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 00:29:53.694 [2024-07-15 11:45:27.987164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.987195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 00:29:53.694 [2024-07-15 11:45:27.987381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.694 [2024-07-15 11:45:27.987412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.694 qpair failed and we were unable to recover it. 00:29:53.695 [2024-07-15 11:45:27.987693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.987724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 
00:29:53.695 [2024-07-15 11:45:27.987926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.987958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 00:29:53.695 [2024-07-15 11:45:27.988218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.988248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 00:29:53.695 [2024-07-15 11:45:27.988447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.988478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 00:29:53.695 [2024-07-15 11:45:27.988689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.988719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 00:29:53.695 [2024-07-15 11:45:27.988996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.989016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 00:29:53.695 [2024-07-15 11:45:27.989180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.989199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 00:29:53.695 [2024-07-15 11:45:27.989355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.989375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 00:29:53.695 [2024-07-15 11:45:27.989586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.989616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 00:29:53.695 [2024-07-15 11:45:27.989819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.989850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 00:29:53.695 [2024-07-15 11:45:27.990045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.990076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 
00:29:53.695 [2024-07-15 11:45:27.990348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.990380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 00:29:53.695 [2024-07-15 11:45:27.990588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.990607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 00:29:53.695 [2024-07-15 11:45:27.990784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.990815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 00:29:53.695 [2024-07-15 11:45:27.991040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.991071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 00:29:53.695 [2024-07-15 11:45:27.991176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.991207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 00:29:53.695 [2024-07-15 11:45:27.991414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.991446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 00:29:53.695 [2024-07-15 11:45:27.991636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.991667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 00:29:53.695 [2024-07-15 11:45:27.991925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.991944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 00:29:53.695 [2024-07-15 11:45:27.992053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.992073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 00:29:53.695 [2024-07-15 11:45:27.992245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.992285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 
00:29:53.695 [2024-07-15 11:45:27.992471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.992502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 00:29:53.695 [2024-07-15 11:45:27.992648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.992679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 00:29:53.695 [2024-07-15 11:45:27.992879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.992910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 00:29:53.695 [2024-07-15 11:45:27.993169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.993201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 00:29:53.695 [2024-07-15 11:45:27.993500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.993533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 00:29:53.695 [2024-07-15 11:45:27.993668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.695 [2024-07-15 11:45:27.993688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.695 qpair failed and we were unable to recover it. 00:29:53.695 [2024-07-15 11:45:27.993867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:27.993907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 00:29:53.696 [2024-07-15 11:45:27.994129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:27.994159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 00:29:53.696 [2024-07-15 11:45:27.994281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:27.994314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 00:29:53.696 [2024-07-15 11:45:27.994527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:27.994558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 
00:29:53.696 [2024-07-15 11:45:27.994746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:27.994777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 00:29:53.696 [2024-07-15 11:45:27.995040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:27.995059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 00:29:53.696 [2024-07-15 11:45:27.995223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:27.995243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 00:29:53.696 [2024-07-15 11:45:27.995410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:27.995441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 00:29:53.696 [2024-07-15 11:45:27.995720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:27.995750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 00:29:53.696 [2024-07-15 11:45:27.995878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:27.995909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 00:29:53.696 [2024-07-15 11:45:27.996118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:27.996148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 00:29:53.696 [2024-07-15 11:45:27.996277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:27.996308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 00:29:53.696 [2024-07-15 11:45:27.996569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:27.996599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 00:29:53.696 [2024-07-15 11:45:27.996792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:27.996811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 
00:29:53.696 [2024-07-15 11:45:27.997061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:27.997091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 00:29:53.696 [2024-07-15 11:45:27.997297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:27.997335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 00:29:53.696 [2024-07-15 11:45:27.997650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:27.997682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 00:29:53.696 [2024-07-15 11:45:27.997823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:27.997854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 00:29:53.696 [2024-07-15 11:45:27.998033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:27.998052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 00:29:53.696 [2024-07-15 11:45:27.998239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:27.998296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 00:29:53.696 [2024-07-15 11:45:27.998538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:27.998568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 00:29:53.696 [2024-07-15 11:45:27.998768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:27.998799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 00:29:53.696 [2024-07-15 11:45:27.999109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:27.999139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 00:29:53.696 [2024-07-15 11:45:27.999353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:27.999385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 
00:29:53.696 [2024-07-15 11:45:27.999642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:27.999673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 00:29:53.696 [2024-07-15 11:45:27.999951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:27.999989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 00:29:53.696 [2024-07-15 11:45:28.000276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:28.000309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.696 qpair failed and we were unable to recover it. 00:29:53.696 [2024-07-15 11:45:28.000596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.696 [2024-07-15 11:45:28.000627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 00:29:53.697 [2024-07-15 11:45:28.000899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.000930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 00:29:53.697 [2024-07-15 11:45:28.001133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.001163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 00:29:53.697 [2024-07-15 11:45:28.001368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.001400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 00:29:53.697 [2024-07-15 11:45:28.001655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.001686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 00:29:53.697 [2024-07-15 11:45:28.001871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.001891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 00:29:53.697 [2024-07-15 11:45:28.002086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.002105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 
00:29:53.697 [2024-07-15 11:45:28.002379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.002410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 00:29:53.697 [2024-07-15 11:45:28.002716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.002747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 00:29:53.697 [2024-07-15 11:45:28.002929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.002948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 00:29:53.697 [2024-07-15 11:45:28.003215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.003246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 00:29:53.697 [2024-07-15 11:45:28.003410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.003442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 00:29:53.697 [2024-07-15 11:45:28.003559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.003590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 00:29:53.697 [2024-07-15 11:45:28.003799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.003818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 00:29:53.697 [2024-07-15 11:45:28.004048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.004080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 00:29:53.697 [2024-07-15 11:45:28.004289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.004321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 00:29:53.697 [2024-07-15 11:45:28.004459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.004489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 
00:29:53.697 [2024-07-15 11:45:28.004802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.004831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 00:29:53.697 [2024-07-15 11:45:28.005087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.005118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 00:29:53.697 [2024-07-15 11:45:28.005319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.005351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 00:29:53.697 [2024-07-15 11:45:28.005547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.005566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 00:29:53.697 [2024-07-15 11:45:28.005732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.005763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 00:29:53.697 [2024-07-15 11:45:28.005894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.005925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 00:29:53.697 [2024-07-15 11:45:28.006179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.006210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 00:29:53.697 [2024-07-15 11:45:28.006434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.006465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 00:29:53.697 [2024-07-15 11:45:28.006594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.006613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 00:29:53.697 [2024-07-15 11:45:28.006841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.006871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 
00:29:53.697 [2024-07-15 11:45:28.007077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.697 [2024-07-15 11:45:28.007108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.697 qpair failed and we were unable to recover it. 00:29:53.697 [2024-07-15 11:45:28.007251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.698 [2024-07-15 11:45:28.007296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.698 qpair failed and we were unable to recover it. 00:29:53.698 [2024-07-15 11:45:28.007550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.698 [2024-07-15 11:45:28.007581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.698 qpair failed and we were unable to recover it. 00:29:53.698 [2024-07-15 11:45:28.007787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.698 [2024-07-15 11:45:28.007806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.698 qpair failed and we were unable to recover it. 00:29:53.698 [2024-07-15 11:45:28.007967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.698 [2024-07-15 11:45:28.007986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.698 qpair failed and we were unable to recover it. 00:29:53.698 [2024-07-15 11:45:28.008162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.698 [2024-07-15 11:45:28.008193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.698 qpair failed and we were unable to recover it. 00:29:53.698 [2024-07-15 11:45:28.008328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.698 [2024-07-15 11:45:28.008361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.698 qpair failed and we were unable to recover it. 00:29:53.698 [2024-07-15 11:45:28.008573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.698 [2024-07-15 11:45:28.008604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.698 qpair failed and we were unable to recover it. 00:29:53.698 [2024-07-15 11:45:28.008887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.698 [2024-07-15 11:45:28.008919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.698 qpair failed and we were unable to recover it. 00:29:53.698 [2024-07-15 11:45:28.009116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.698 [2024-07-15 11:45:28.009135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.698 qpair failed and we were unable to recover it. 
00:29:53.698 [2024-07-15 11:45:28.009270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.698 [2024-07-15 11:45:28.009302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.698 qpair failed and we were unable to recover it. 00:29:53.698 [2024-07-15 11:45:28.009582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.698 [2024-07-15 11:45:28.009613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.698 qpair failed and we were unable to recover it. 00:29:53.698 [2024-07-15 11:45:28.009811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.698 [2024-07-15 11:45:28.009841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.698 qpair failed and we were unable to recover it. 00:29:53.698 [2024-07-15 11:45:28.010051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.698 [2024-07-15 11:45:28.010083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.698 qpair failed and we were unable to recover it. 00:29:53.698 [2024-07-15 11:45:28.010369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.698 [2024-07-15 11:45:28.010407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.698 qpair failed and we were unable to recover it. 00:29:53.698 [2024-07-15 11:45:28.010552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.698 [2024-07-15 11:45:28.010583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.698 qpair failed and we were unable to recover it. 00:29:53.698 [2024-07-15 11:45:28.010768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.698 [2024-07-15 11:45:28.010799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.698 qpair failed and we were unable to recover it. 00:29:53.698 [2024-07-15 11:45:28.011009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.698 [2024-07-15 11:45:28.011028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.698 qpair failed and we were unable to recover it. 00:29:53.698 [2024-07-15 11:45:28.011152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.698 [2024-07-15 11:45:28.011172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.698 qpair failed and we were unable to recover it. 00:29:53.698 [2024-07-15 11:45:28.011299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.698 [2024-07-15 11:45:28.011318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.698 qpair failed and we were unable to recover it. 
00:29:53.698 [2024-07-15 11:45:28.011552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.698 [2024-07-15 11:45:28.011571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.698 qpair failed and we were unable to recover it. 00:29:53.698 [2024-07-15 11:45:28.011817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.698 [2024-07-15 11:45:28.011836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.698 qpair failed and we were unable to recover it. 00:29:53.698 [2024-07-15 11:45:28.012064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.012083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 00:29:53.699 [2024-07-15 11:45:28.012261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.012281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 00:29:53.699 [2024-07-15 11:45:28.012458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.012478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 00:29:53.699 [2024-07-15 11:45:28.012642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.012674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 00:29:53.699 [2024-07-15 11:45:28.012857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.012888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 00:29:53.699 [2024-07-15 11:45:28.013015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.013046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 00:29:53.699 [2024-07-15 11:45:28.013329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.013362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 00:29:53.699 [2024-07-15 11:45:28.013478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.013509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 
00:29:53.699 [2024-07-15 11:45:28.013710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.013741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 00:29:53.699 [2024-07-15 11:45:28.014012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.014042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 00:29:53.699 [2024-07-15 11:45:28.014237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.014285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 00:29:53.699 [2024-07-15 11:45:28.014506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.014526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 00:29:53.699 [2024-07-15 11:45:28.014755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.014774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 00:29:53.699 [2024-07-15 11:45:28.014866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.014884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 00:29:53.699 [2024-07-15 11:45:28.015068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.015099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 00:29:53.699 [2024-07-15 11:45:28.015229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.015270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 00:29:53.699 [2024-07-15 11:45:28.015592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.015612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 00:29:53.699 [2024-07-15 11:45:28.015793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.015824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 
00:29:53.699 [2024-07-15 11:45:28.016010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.016040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 00:29:53.699 [2024-07-15 11:45:28.016239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.016282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 00:29:53.699 [2024-07-15 11:45:28.016528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.016560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 00:29:53.699 [2024-07-15 11:45:28.016776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.016807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 00:29:53.699 [2024-07-15 11:45:28.017017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.017036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 00:29:53.699 [2024-07-15 11:45:28.017214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.017244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 00:29:53.699 [2024-07-15 11:45:28.017446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.017478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 00:29:53.699 [2024-07-15 11:45:28.017662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.017692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 00:29:53.699 [2024-07-15 11:45:28.017888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.017907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 00:29:53.699 [2024-07-15 11:45:28.018140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.699 [2024-07-15 11:45:28.018159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.699 qpair failed and we were unable to recover it. 
00:29:53.700 [2024-07-15 11:45:28.018399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.018431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 00:29:53.700 [2024-07-15 11:45:28.018710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.018740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 00:29:53.700 [2024-07-15 11:45:28.019061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.019092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 00:29:53.700 [2024-07-15 11:45:28.019299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.019331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 00:29:53.700 [2024-07-15 11:45:28.019611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.019630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 00:29:53.700 [2024-07-15 11:45:28.019844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.019864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 00:29:53.700 [2024-07-15 11:45:28.020057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.020076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 00:29:53.700 [2024-07-15 11:45:28.020193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.020213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 00:29:53.700 [2024-07-15 11:45:28.020324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.020344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 00:29:53.700 [2024-07-15 11:45:28.020604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.020623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 
00:29:53.700 [2024-07-15 11:45:28.020733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.020752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 00:29:53.700 [2024-07-15 11:45:28.020875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.020906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 00:29:53.700 [2024-07-15 11:45:28.021060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.021090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 00:29:53.700 [2024-07-15 11:45:28.021233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.021273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 00:29:53.700 [2024-07-15 11:45:28.021404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.021434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 00:29:53.700 [2024-07-15 11:45:28.021636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.021666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 00:29:53.700 [2024-07-15 11:45:28.021788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.021807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 00:29:53.700 [2024-07-15 11:45:28.022064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.022083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 00:29:53.700 [2024-07-15 11:45:28.022349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.022369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 00:29:53.700 [2024-07-15 11:45:28.022620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.022640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 
00:29:53.700 [2024-07-15 11:45:28.022802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.022822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 00:29:53.700 [2024-07-15 11:45:28.022993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.023012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 00:29:53.700 [2024-07-15 11:45:28.023265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.023296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 00:29:53.700 [2024-07-15 11:45:28.023496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.023526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 00:29:53.700 [2024-07-15 11:45:28.023721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.023740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 00:29:53.700 [2024-07-15 11:45:28.023891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.023927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 00:29:53.700 [2024-07-15 11:45:28.024211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.700 [2024-07-15 11:45:28.024241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.700 qpair failed and we were unable to recover it. 00:29:53.700 [2024-07-15 11:45:28.024385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.024420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 00:29:53.701 [2024-07-15 11:45:28.024639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.024669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 00:29:53.701 [2024-07-15 11:45:28.024945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.024964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 
00:29:53.701 [2024-07-15 11:45:28.025213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.025247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 00:29:53.701 [2024-07-15 11:45:28.025544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.025586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 00:29:53.701 [2024-07-15 11:45:28.025906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.025925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 00:29:53.701 [2024-07-15 11:45:28.026111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.026130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 00:29:53.701 [2024-07-15 11:45:28.026305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.026325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 00:29:53.701 [2024-07-15 11:45:28.026502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.026521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 00:29:53.701 [2024-07-15 11:45:28.026700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.026719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 00:29:53.701 [2024-07-15 11:45:28.026969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.027003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 00:29:53.701 [2024-07-15 11:45:28.027268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.027301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 00:29:53.701 [2024-07-15 11:45:28.027453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.027484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 
00:29:53.701 [2024-07-15 11:45:28.027785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.027815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 00:29:53.701 [2024-07-15 11:45:28.028001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.028021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 00:29:53.701 [2024-07-15 11:45:28.028252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.028289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 00:29:53.701 [2024-07-15 11:45:28.028475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.028506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 00:29:53.701 [2024-07-15 11:45:28.028704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.028723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 00:29:53.701 [2024-07-15 11:45:28.029032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.029063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 00:29:53.701 [2024-07-15 11:45:28.029351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.029382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 00:29:53.701 [2024-07-15 11:45:28.029679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.029710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 00:29:53.701 [2024-07-15 11:45:28.029853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.029884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 00:29:53.701 [2024-07-15 11:45:28.030123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.030154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 
00:29:53.701 [2024-07-15 11:45:28.030434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.030466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 00:29:53.701 [2024-07-15 11:45:28.030621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.030651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 00:29:53.701 [2024-07-15 11:45:28.030906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.030937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 00:29:53.701 [2024-07-15 11:45:28.031223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.031262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 00:29:53.701 [2024-07-15 11:45:28.031381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.701 [2024-07-15 11:45:28.031412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.701 qpair failed and we were unable to recover it. 00:29:53.702 [2024-07-15 11:45:28.031627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.702 [2024-07-15 11:45:28.031657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.702 qpair failed and we were unable to recover it. 00:29:53.702 [2024-07-15 11:45:28.031915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.702 [2024-07-15 11:45:28.031946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.702 qpair failed and we were unable to recover it. 00:29:53.702 [2024-07-15 11:45:28.032164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.702 [2024-07-15 11:45:28.032195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.702 qpair failed and we were unable to recover it. 00:29:53.702 [2024-07-15 11:45:28.032484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.702 [2024-07-15 11:45:28.032516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.702 qpair failed and we were unable to recover it. 00:29:53.702 [2024-07-15 11:45:28.032824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.702 [2024-07-15 11:45:28.032855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.702 qpair failed and we were unable to recover it. 
00:29:53.702 [2024-07-15 11:45:28.033057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.702 [2024-07-15 11:45:28.033088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.702 qpair failed and we were unable to recover it. 00:29:53.702 [2024-07-15 11:45:28.033206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.702 [2024-07-15 11:45:28.033236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.702 qpair failed and we were unable to recover it. 00:29:53.702 [2024-07-15 11:45:28.033390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.702 [2024-07-15 11:45:28.033422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.702 qpair failed and we were unable to recover it. 00:29:53.702 [2024-07-15 11:45:28.033704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.702 [2024-07-15 11:45:28.033734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.702 qpair failed and we were unable to recover it. 00:29:53.702 [2024-07-15 11:45:28.033893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.702 [2024-07-15 11:45:28.033913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.702 qpair failed and we were unable to recover it. 00:29:53.702 [2024-07-15 11:45:28.034168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.702 [2024-07-15 11:45:28.034187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.702 qpair failed and we were unable to recover it. 00:29:53.702 [2024-07-15 11:45:28.034334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.702 [2024-07-15 11:45:28.034354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.702 qpair failed and we were unable to recover it. 00:29:53.702 [2024-07-15 11:45:28.034605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.702 [2024-07-15 11:45:28.034636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.702 qpair failed and we were unable to recover it. 00:29:53.702 [2024-07-15 11:45:28.034855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.702 [2024-07-15 11:45:28.034874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.702 qpair failed and we were unable to recover it. 00:29:53.702 [2024-07-15 11:45:28.035049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.702 [2024-07-15 11:45:28.035079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.702 qpair failed and we were unable to recover it. 
00:29:53.702 [2024-07-15 11:45:28.035283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.702 [2024-07-15 11:45:28.035315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.702 qpair failed and we were unable to recover it. 00:29:53.702 [2024-07-15 11:45:28.035514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.702 [2024-07-15 11:45:28.035550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.702 qpair failed and we were unable to recover it. 00:29:53.702 [2024-07-15 11:45:28.035775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.702 [2024-07-15 11:45:28.035805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.702 qpair failed and we were unable to recover it. 00:29:53.702 [2024-07-15 11:45:28.035953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.702 [2024-07-15 11:45:28.035984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.702 qpair failed and we were unable to recover it. 00:29:53.702 [2024-07-15 11:45:28.036112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.702 [2024-07-15 11:45:28.036142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.702 qpair failed and we were unable to recover it. 00:29:53.702 [2024-07-15 11:45:28.036431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.702 [2024-07-15 11:45:28.036463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.702 qpair failed and we were unable to recover it. 00:29:53.702 [2024-07-15 11:45:28.036650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.702 [2024-07-15 11:45:28.036681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.702 qpair failed and we were unable to recover it. 00:29:53.702 [2024-07-15 11:45:28.037012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.702 [2024-07-15 11:45:28.037043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.702 qpair failed and we were unable to recover it. 00:29:53.702 [2024-07-15 11:45:28.037342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.702 [2024-07-15 11:45:28.037373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.702 qpair failed and we were unable to recover it. 00:29:53.702 [2024-07-15 11:45:28.037673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.702 [2024-07-15 11:45:28.037704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.702 qpair failed and we were unable to recover it. 
00:29:53.702 [2024-07-15 11:45:28.037922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.702 [2024-07-15 11:45:28.037952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:53.702 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 11:45:28.038 through 11:45:28.083 ...]
00:29:53.709 [2024-07-15 11:45:28.083350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.709 [2024-07-15 11:45:28.083370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:53.709 qpair failed and we were unable to recover it.
00:29:53.709 [2024-07-15 11:45:28.083482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.083502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-15 11:45:28.083603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.083623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-15 11:45:28.083724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.083743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-15 11:45:28.083840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.083857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-15 11:45:28.083960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.083979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-15 11:45:28.084099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.084119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-15 11:45:28.084290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.084310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-15 11:45:28.084401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.084419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-15 11:45:28.084616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.084636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-15 11:45:28.084726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.084744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 
00:29:53.710 [2024-07-15 11:45:28.084851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.084870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-15 11:45:28.085024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.085043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-15 11:45:28.085168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.085198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-15 11:45:28.085409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.085441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-15 11:45:28.085648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.085678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-15 11:45:28.085816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.085835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-15 11:45:28.085942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.085960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-15 11:45:28.086126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.086156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-15 11:45:28.086356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.086388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-15 11:45:28.086658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.086690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 
00:29:53.710 [2024-07-15 11:45:28.086944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.086963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-15 11:45:28.087082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.087105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-15 11:45:28.087369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.087390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-15 11:45:28.087549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.087568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-15 11:45:28.087782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.087800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-15 11:45:28.087912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.087931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-15 11:45:28.088020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-15 11:45:28.088038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-15 11:45:28.088128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.088147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.088268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.088289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.088535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.088566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 
00:29:53.711 [2024-07-15 11:45:28.088825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.088856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.089055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.089085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.089214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.089245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.089440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.089472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.089597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.089627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.089776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.089808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.090009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.090040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.090176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.090206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.090510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.090543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.090747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.090777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 
00:29:53.711 [2024-07-15 11:45:28.090970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.091000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.091152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.091183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.091304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.091324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.091438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.091458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.091566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.091586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.091764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.091783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.091876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.091894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.092017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.092036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.092223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.092243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.092413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.092445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 
00:29:53.711 [2024-07-15 11:45:28.092701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.092731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.092978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.092997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.093161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.093180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.093340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.093359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.093561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.093592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.093869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.093900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.094136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.094166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.094358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.094390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.094592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.094623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.094831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.094850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 
00:29:53.711 [2024-07-15 11:45:28.095037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.095081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-15 11:45:28.095376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-15 11:45:28.095414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.095638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.095668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.095888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.095918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.096111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.096141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.096353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.096385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.096571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.096601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.096821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.096852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.096986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.097016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.097209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.097240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 
00:29:53.712 [2024-07-15 11:45:28.097487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.097518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.097731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.097761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.097962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.097992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.098146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.098166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.098335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.098355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.098523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.098543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.098641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.098661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.098832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.098850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.098965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.098996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.099187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.099217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 
00:29:53.712 [2024-07-15 11:45:28.099466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.099498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.099630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.099661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.099922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.099953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.100149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.100169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.100269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.100288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.100418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.100437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.100545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.100564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.100801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.100832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.101090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.101160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.101485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.101522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 
00:29:53.712 [2024-07-15 11:45:28.101672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.101703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.101825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.101856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.102063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.102093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.102349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.102385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.102674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.102695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-15 11:45:28.102812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-15 11:45:28.102831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.713 qpair failed and we were unable to recover it. 00:29:53.713 [2024-07-15 11:45:28.102938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.713 [2024-07-15 11:45:28.102957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.713 qpair failed and we were unable to recover it. 00:29:53.713 [2024-07-15 11:45:28.103069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.713 [2024-07-15 11:45:28.103088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.713 qpair failed and we were unable to recover it. 00:29:53.713 [2024-07-15 11:45:28.103188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.713 [2024-07-15 11:45:28.103223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.713 qpair failed and we were unable to recover it. 00:29:53.713 [2024-07-15 11:45:28.103377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.713 [2024-07-15 11:45:28.103409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.713 qpair failed and we were unable to recover it. 
00:29:53.713 [2024-07-15 11:45:28.103611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.713 [2024-07-15 11:45:28.103642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.713 qpair failed and we were unable to recover it. 00:29:53.713 [2024-07-15 11:45:28.103962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.713 [2024-07-15 11:45:28.103994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.713 qpair failed and we were unable to recover it. 00:29:53.713 [2024-07-15 11:45:28.104201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.713 [2024-07-15 11:45:28.104232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.713 qpair failed and we were unable to recover it. 00:29:53.713 [2024-07-15 11:45:28.104522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.713 [2024-07-15 11:45:28.104554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.713 qpair failed and we were unable to recover it. 00:29:53.713 [2024-07-15 11:45:28.104749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.713 [2024-07-15 11:45:28.104769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.713 qpair failed and we were unable to recover it. 00:29:53.713 [2024-07-15 11:45:28.104932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.713 [2024-07-15 11:45:28.104953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.713 qpair failed and we were unable to recover it. 00:29:53.713 [2024-07-15 11:45:28.105082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.713 [2024-07-15 11:45:28.105101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:53.713 qpair failed and we were unable to recover it. 00:29:54.000 [2024-07-15 11:45:28.105306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.000 [2024-07-15 11:45:28.105339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.000 qpair failed and we were unable to recover it. 00:29:54.000 [2024-07-15 11:45:28.105552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.000 [2024-07-15 11:45:28.105586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.000 qpair failed and we were unable to recover it. 00:29:54.000 [2024-07-15 11:45:28.105798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.000 [2024-07-15 11:45:28.105829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.000 qpair failed and we were unable to recover it. 
00:29:54.000 [2024-07-15 11:45:28.106085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.000 [2024-07-15 11:45:28.106116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.000 qpair failed and we were unable to recover it. 00:29:54.000 [2024-07-15 11:45:28.106315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.000 [2024-07-15 11:45:28.106336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.000 qpair failed and we were unable to recover it. 00:29:54.000 [2024-07-15 11:45:28.106438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.000 [2024-07-15 11:45:28.106458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.000 qpair failed and we were unable to recover it. 00:29:54.000 [2024-07-15 11:45:28.106643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.000 [2024-07-15 11:45:28.106662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.000 qpair failed and we were unable to recover it. 00:29:54.000 [2024-07-15 11:45:28.106837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.000 [2024-07-15 11:45:28.106867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.000 qpair failed and we were unable to recover it. 00:29:54.000 [2024-07-15 11:45:28.107062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.000 [2024-07-15 11:45:28.107093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.000 qpair failed and we were unable to recover it. 00:29:54.000 [2024-07-15 11:45:28.107237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.000 [2024-07-15 11:45:28.107276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.000 qpair failed and we were unable to recover it. 00:29:54.000 [2024-07-15 11:45:28.107413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.000 [2024-07-15 11:45:28.107432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.000 qpair failed and we were unable to recover it. 00:29:54.000 [2024-07-15 11:45:28.107615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.000 [2024-07-15 11:45:28.107635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.000 qpair failed and we were unable to recover it. 00:29:54.000 [2024-07-15 11:45:28.107752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.000 [2024-07-15 11:45:28.107771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.000 qpair failed and we were unable to recover it. 
00:29:54.000 [2024-07-15 11:45:28.107945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.000 [2024-07-15 11:45:28.107965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.000 qpair failed and we were unable to recover it. 00:29:54.000 [2024-07-15 11:45:28.108147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.000 [2024-07-15 11:45:28.108167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.000 qpair failed and we were unable to recover it. 00:29:54.000 [2024-07-15 11:45:28.108435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.000 [2024-07-15 11:45:28.108455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.000 qpair failed and we were unable to recover it. 00:29:54.000 [2024-07-15 11:45:28.108562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.000 [2024-07-15 11:45:28.108581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.000 qpair failed and we were unable to recover it. 00:29:54.000 [2024-07-15 11:45:28.108702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.000 [2024-07-15 11:45:28.108722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.000 qpair failed and we were unable to recover it. 00:29:54.000 [2024-07-15 11:45:28.108899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.000 [2024-07-15 11:45:28.108918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.000 qpair failed and we were unable to recover it. 00:29:54.000 [2024-07-15 11:45:28.109097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.000 [2024-07-15 11:45:28.109117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.000 qpair failed and we were unable to recover it. 00:29:54.000 [2024-07-15 11:45:28.109219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.000 [2024-07-15 11:45:28.109239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.000 qpair failed and we were unable to recover it. 00:29:54.000 [2024-07-15 11:45:28.109368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.000 [2024-07-15 11:45:28.109405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.000 qpair failed and we were unable to recover it. 00:29:54.000 [2024-07-15 11:45:28.109608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.000 [2024-07-15 11:45:28.109639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.000 qpair failed and we were unable to recover it. 
00:29:54.000 [2024-07-15 11:45:28.109762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-15 11:45:28.109780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-15 11:45:28.109895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-15 11:45:28.109915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously for every remaining connection attempt in this interval, ending with the final attempt shown below ...]
00:29:54.003 [2024-07-15 11:45:28.156519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.003 [2024-07-15 11:45:28.156550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:54.003 qpair failed and we were unable to recover it.
00:29:54.003 [2024-07-15 11:45:28.156736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.156766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.156950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.156970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.157239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.157279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.157539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.157571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.157709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.157728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.157983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.158002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.158116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.158135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.158366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.158386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.158569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.158600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.158807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.158837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 
00:29:54.003 [2024-07-15 11:45:28.159091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.159111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.159355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.159387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.159518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.159550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.159747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.159778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.160043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.160062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.160226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.160246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.160421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.160453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.160761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.160792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.160925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.160956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.161141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.161172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 
00:29:54.003 [2024-07-15 11:45:28.161358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.161391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.161590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.161621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.161949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.161981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.162172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.162203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.162422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.162454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.162590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.162621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.162819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.162849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.163069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.163101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.163294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.163317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.163553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.163584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 
00:29:54.003 [2024-07-15 11:45:28.163850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.163881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.164188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.164218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.164465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.164497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.164692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.164712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.164895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.164926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.165124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.165155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.165352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.165385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.165496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.165516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.165747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.165767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.165865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.165884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 
00:29:54.003 [2024-07-15 11:45:28.166044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.166088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.166287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.166319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.166534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.166565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.166782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.166813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.166941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.166961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.167153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.167185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.167447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.167479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.167669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.167700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.167970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.168002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.168232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.168269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 
00:29:54.003 [2024-07-15 11:45:28.168471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.168502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.168685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.168704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-15 11:45:28.168963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-15 11:45:28.168994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.169128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.169159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.169466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.169499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.169729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.169749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.170017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.170047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.170180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.170211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.170350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.170381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.170667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.170697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 
00:29:54.004 [2024-07-15 11:45:28.170954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.170985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.171135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.171153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.171404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.171424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.171587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.171618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.171875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.171906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.172089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.172109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.172227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.172247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.172506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.172542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.172756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.172792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.173064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.173084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 
00:29:54.004 [2024-07-15 11:45:28.173211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.173230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.173414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.173435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.173514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.173534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.173707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.173727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.173904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.173934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.174119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.174149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.174325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.174359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.174492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.174511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.174615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.174634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.174865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.174884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 
00:29:54.004 [2024-07-15 11:45:28.175098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.175128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.175332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.175352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.175515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.175535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.175830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.175860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.176144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.176174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.176340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.176372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.176595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.176626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.176813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.176844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.177129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.177159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.177373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.177393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 
00:29:54.004 [2024-07-15 11:45:28.177623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.177643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.177751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.177770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.178030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.178061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.178279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.178312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.178436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.178468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.178808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.178878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.179112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.179147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.179470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.179506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.179755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.179786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.180062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.180093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 
00:29:54.004 [2024-07-15 11:45:28.180294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.180327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.180518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.180549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.180759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.180790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.181086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.181117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.181329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.181351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.181612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.181648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.181794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.181825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.182041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.182073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.182204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.182234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.182393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.182425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 
00:29:54.004 [2024-07-15 11:45:28.182652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.182683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.182886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.182906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.183028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.183060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.183284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.183316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.183518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.183549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.183740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.183771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.183918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.183937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.184184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.184214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.184449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.184480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.184604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.184635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 
00:29:54.004 [2024-07-15 11:45:28.184888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.184919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.185106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.185137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.185353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.185385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.004 qpair failed and we were unable to recover it. 00:29:54.004 [2024-07-15 11:45:28.185657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.004 [2024-07-15 11:45:28.185676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.185931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.185967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.186096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.186127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.186329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.186362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.186582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.186613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.186743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.186774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.187029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.187061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 
00:29:54.005 [2024-07-15 11:45:28.187217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.187248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.187414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.187446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.187705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.187736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.187923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.187953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.188094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.188124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.188245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.188304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.188521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.188541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.188706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.188725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.188855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.188886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.189073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.189105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 
00:29:54.005 [2024-07-15 11:45:28.189282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.189302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.189561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.189592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.189789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.189820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.190005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.190036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.190232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.190285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.190434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.190466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.190780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.190811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.190998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.191028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.191156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.191176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.191413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.191445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 
00:29:54.005 [2024-07-15 11:45:28.191750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.191781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.191985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.192016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.192286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.192318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.192506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.192537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.192737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.192768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.193035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.193065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.193269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.193301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.193529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.193559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.193771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.193790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.193901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.193921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 
00:29:54.005 [2024-07-15 11:45:28.194099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.194129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.194326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.194359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.194509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.194540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.194692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.194724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.194982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.195013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.195277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.195309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.195545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.195576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.195800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.195832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.196030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.005 [2024-07-15 11:45:28.196062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.005 qpair failed and we were unable to recover it. 00:29:54.005 [2024-07-15 11:45:28.196268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.196288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 
00:29:54.006 [2024-07-15 11:45:28.196500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.196531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.196671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.196702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.196909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.196940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.197156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.197175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.197301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.197322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.197529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.197552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.197744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.197774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.197930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.197961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.198172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.198203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.198442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.198462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 
00:29:54.006 [2024-07-15 11:45:28.198630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.198649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.198782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.198801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.199036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.199056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.199228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.199248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.199420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.199452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.199579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.199611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.199808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.199839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.200028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.200059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.200316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.200336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.200516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.200535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 
00:29:54.006 [2024-07-15 11:45:28.200640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.200659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.200839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.200859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.201069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.201099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.201300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.201332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.201475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.201506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.201637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.201669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.201871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.201901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.202119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.202139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.202326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.202345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.202606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.202625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 
00:29:54.006 [2024-07-15 11:45:28.202789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.202809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.203045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.203075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.203271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.203291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.204099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.204126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.204307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.204328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.204605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.204625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.204873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.204893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.205071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.205090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.205250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.205274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 00:29:54.006 [2024-07-15 11:45:28.205395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.006 [2024-07-15 11:45:28.205414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.006 qpair failed and we were unable to recover it. 
00:29:54.006 [2024-07-15 11:45:28.205508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.205528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.205636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.205655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.205832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.205851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.205952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.205972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.206123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.206142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.206240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.206276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.206374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.206394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.206502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.206522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.206757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.206777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.206896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.206916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 
00:29:54.007 [2024-07-15 11:45:28.207145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.207164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.207273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.207293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.207414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.207433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.207626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.207646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.207845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.207864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.207978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.207997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.208179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.208198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.208403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.208423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.208603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.208622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.208857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.208876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 
00:29:54.007 [2024-07-15 11:45:28.209049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.209069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.209185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.209204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.209389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.209408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.209580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.209611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.209736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.209768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.209977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.210009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.210205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.210236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.210372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.210392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.210648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.210668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.210926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.210945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 
00:29:54.007 [2024-07-15 11:45:28.211107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.211127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.211286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.211307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.211434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.211453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.211633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.211652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.211906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.211926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.212031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.212051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.212167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.212186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.212299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.212319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.212490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.212509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.007 [2024-07-15 11:45:28.212777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.212797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 
00:29:54.007 [2024-07-15 11:45:28.212991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.007 [2024-07-15 11:45:28.213009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.007 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.213184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.213204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.213437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.213474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.213661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.213691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.213887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.213919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.214053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.214089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.214213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.214244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.214387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.214419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.214705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.214736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.214875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.214907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 
00:29:54.008 [2024-07-15 11:45:28.215098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.215129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.215384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.215416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.215551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.215593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.215786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.215805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.215982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.216001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.216180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.216210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.216339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.216372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.216656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.216686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.216844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.216863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.216947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.216966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 
00:29:54.008 [2024-07-15 11:45:28.217145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.217164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.217286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.217307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.217496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.217515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.217753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.217784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.217913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.217944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.218146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.218177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.218318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.218351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.218550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.218582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.218796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.218827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.219029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.219049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 
00:29:54.008 [2024-07-15 11:45:28.219222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.219242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.219339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.219358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.219455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.219475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.219654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.219685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.219872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.008 [2024-07-15 11:45:28.219902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.008 qpair failed and we were unable to recover it. 00:29:54.008 [2024-07-15 11:45:28.220016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.220036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.220258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.220278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.220375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.220394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.220561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.220581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.220843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.220883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 
00:29:54.009 [2024-07-15 11:45:28.221145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.221164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.221282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.221302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.221577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.221596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.221830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.221861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.222008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.222039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.222224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.222292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.222420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.222439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.222546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.222565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.222733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.222752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.222914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.222934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 
00:29:54.009 [2024-07-15 11:45:28.223139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.223170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.223354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.223387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.223628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.223659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.223850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.223881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.224013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.224033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.224267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.224288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.224383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.224403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.224596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.224627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.224896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.224938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.225064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.225083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 
00:29:54.009 [2024-07-15 11:45:28.225272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.225304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.225531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.225562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.225827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.225859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.226164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.226195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.226488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.226521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.226742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.226774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.226963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.226995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.227276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.227297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.227470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.227489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.227657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.227689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 
00:29:54.009 [2024-07-15 11:45:28.227999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.228030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.228299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.228331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.228594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.228625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.228890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.228920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.229193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.009 [2024-07-15 11:45:28.229224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.009 qpair failed and we were unable to recover it. 00:29:54.009 [2024-07-15 11:45:28.229492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.229524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.229752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.229783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.229908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.229938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.230153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.230183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.230377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.230410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 
00:29:54.010 [2024-07-15 11:45:28.230598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.230629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.230884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.230915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.231137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.231168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.231462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.231483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.231745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.231776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.231905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.231941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.232071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.232102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.232416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.232448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.232558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.232577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.232825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.232855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 
00:29:54.010 [2024-07-15 11:45:28.232979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.233010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.233161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.233193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.233386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.233418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.233588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.233607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.233733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.233752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.233915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.233934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.234096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.234116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.234305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.234324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.234430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.234449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.234644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.234674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 
00:29:54.010 [2024-07-15 11:45:28.234933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.234964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.235154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.235185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.235374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.235407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.235691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.235722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.235848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.235879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.236092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.236123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.236338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.236369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.236505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.236537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.236724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.236755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.237058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.237089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 
00:29:54.010 [2024-07-15 11:45:28.237279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.237311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.237516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.237553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.237862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.237932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.238153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.238188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.238399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.238434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.238583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.010 [2024-07-15 11:45:28.238614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.010 qpair failed and we were unable to recover it. 00:29:54.010 [2024-07-15 11:45:28.238815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.238846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.238981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.239011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.239198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.239228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.239458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.239490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 
00:29:54.011 [2024-07-15 11:45:28.239674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.239705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.239961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.239991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.240218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.240248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.240481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.240513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.240630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.240652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.240817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.240839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.240990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.241026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.241263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.241296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.241564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.241595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.241745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.241776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 
00:29:54.011 [2024-07-15 11:45:28.241904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.241936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.242164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.242195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.242330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.242350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.242542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.242573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.242802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.242833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.243043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.243062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.243301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.243333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.243478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.243509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.243635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.243667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.243805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.243836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 
00:29:54.011 [2024-07-15 11:45:28.244054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.244085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.244298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.244330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.244528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.244548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.244813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.244844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.245034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.245054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.245235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.245273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.245575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.245606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.245747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.245778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.246015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.246046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.246312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.246357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 
00:29:54.011 [2024-07-15 11:45:28.246538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.246558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.246726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.246757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.246994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.247026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.247147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.247166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.247325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.247345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.247469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.247500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.011 [2024-07-15 11:45:28.247644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.011 [2024-07-15 11:45:28.247675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.011 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.247868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.247899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.248178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.248210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.248501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.248533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 
00:29:54.012 [2024-07-15 11:45:28.248844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.248875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.249162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.249192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.249393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.249425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.249654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.249684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.249940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.249971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.250082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.250104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.250288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.250319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.250520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.250552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.250773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.250804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.250955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.250986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 
00:29:54.012 [2024-07-15 11:45:28.251105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.251136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.251283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.251315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.251467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.251498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.251756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.251786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.251978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.252009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.252213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.252244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.252470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.252501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.252784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.252815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.253038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.253069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.253325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.253346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 
00:29:54.012 [2024-07-15 11:45:28.253576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.253595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.253827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.253845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.254040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.254059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.254294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.254314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.254477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.254497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.254616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.254647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.254765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.254796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.255023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.255055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.255185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.255204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.255445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.255477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 
00:29:54.012 [2024-07-15 11:45:28.255613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.255645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.255916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.012 [2024-07-15 11:45:28.255946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.012 qpair failed and we were unable to recover it. 00:29:54.012 [2024-07-15 11:45:28.256292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.256324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.256475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.256506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.256654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.256685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.256819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.256850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.257105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.257137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.257362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.257382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.257558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.257589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.257842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.257873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 
00:29:54.014 [2024-07-15 11:45:28.258060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.258091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.258309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.258328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.258502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.258534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.258754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.258786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.259092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.259124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.259348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.259371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.259620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.259639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.259850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.259869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.260042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.260061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.260171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.260202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 
00:29:54.014 [2024-07-15 11:45:28.260413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.260444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.260638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.260670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.260960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.260990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.261136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.261167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.261385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.261417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.261622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.261653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.261843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.261874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.262089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.262121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.262305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.262325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.262594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.262625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 
00:29:54.014 [2024-07-15 11:45:28.262812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.262843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.263051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.263082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.263270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.263291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.263465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.263497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.263753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.263784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.263912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.263943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.264059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.264089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.264277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.264298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.264559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.264590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 00:29:54.014 [2024-07-15 11:45:28.264783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.014 [2024-07-15 11:45:28.264814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.014 qpair failed and we were unable to recover it. 
00:29:54.019 [2024-07-15 11:45:28.308298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.019 [2024-07-15 11:45:28.308318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.019 qpair failed and we were unable to recover it. 00:29:54.019 [2024-07-15 11:45:28.308477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.019 [2024-07-15 11:45:28.308497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.019 qpair failed and we were unable to recover it. 00:29:54.019 [2024-07-15 11:45:28.308735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.019 [2024-07-15 11:45:28.308766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.019 qpair failed and we were unable to recover it. 00:29:54.019 [2024-07-15 11:45:28.309001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.309032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.309175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.309206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.309493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.309513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.309626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.309646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.309815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.309834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.310038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.310069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.310279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.310311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 
00:29:54.020 [2024-07-15 11:45:28.310503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.310533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.310818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.310849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.311056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.311092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.311224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.311243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.311433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.311464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.311732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.311762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.312019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.312050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.312371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.312391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.312588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.312618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.312815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.312846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 
00:29:54.020 [2024-07-15 11:45:28.313037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.313068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.313324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.313356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.313627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.313646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.313753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.313772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.314006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.314025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.314188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.314207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.314337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.314358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.314521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.314540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.314721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.314740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.314932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.314963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 
00:29:54.020 [2024-07-15 11:45:28.315195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.315225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.315473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.315506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.315693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.315723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.315830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.315861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.316075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.316105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.316362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.316393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.316659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.316691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.316830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.316861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.317000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.317030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.317324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.317357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 
00:29:54.020 [2024-07-15 11:45:28.317547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.317579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.317836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.317866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.318052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.318083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.020 [2024-07-15 11:45:28.318278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.020 [2024-07-15 11:45:28.318298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.020 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.318475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.318495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.318691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.318723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.318839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.318870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.319121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.319153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.319438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.319470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.319663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.319694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 
00:29:54.021 [2024-07-15 11:45:28.319922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.319953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.320159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.320190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.320379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.320416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.320624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.320644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.320819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.320850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.321075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.321106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.321311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.321344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.321627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.321659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.321875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.321907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.322201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.322242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 
00:29:54.021 [2024-07-15 11:45:28.322432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.322452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.322583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.322602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.322785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.322816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.322948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.322979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.323174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.323205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.323414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.323446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.323650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.323693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.323798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.323817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.323911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.323930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.324179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.324214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 
00:29:54.021 [2024-07-15 11:45:28.324436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.324468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.324674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.324705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.324861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.324892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.325078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.325109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.325386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.325406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.325609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.325627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.325810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.325829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.325944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.325976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.326115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.326146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.326378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.326411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 
00:29:54.021 [2024-07-15 11:45:28.326604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.326624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.326800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.326819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.327066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.327097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.327376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.327416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.021 qpair failed and we were unable to recover it. 00:29:54.021 [2024-07-15 11:45:28.327678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.021 [2024-07-15 11:45:28.327718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.327867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.327898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.328125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.328157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.328335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.328367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.328626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.328656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.328838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.328857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 
00:29:54.022 [2024-07-15 11:45:28.329095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.329126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.329364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.329397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.329595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.329620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.329797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.329817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.329944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.329975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.330172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.330203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.330422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.330453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.330673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.330692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.330857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.330876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.331146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.331177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 
00:29:54.022 [2024-07-15 11:45:28.331413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.331445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.331643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.331675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.331983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.332014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.332292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.332324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.332582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.332613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.332826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.332857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.333060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.333091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.333371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.333402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.333687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.333718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.333839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.333870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 
00:29:54.022 [2024-07-15 11:45:28.334084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.334114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.334321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.334353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.334559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.334590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.334819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.334849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.335040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.335071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.335344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.335364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.335536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.335555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.335731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.335750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.022 [2024-07-15 11:45:28.336007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.022 [2024-07-15 11:45:28.336038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.022 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.336234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.336273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 
00:29:54.023 [2024-07-15 11:45:28.336499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.336530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.336676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.336707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.336894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.336925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.337126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.337157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.337379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.337400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.337630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.337649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.337811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.337842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.338028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.338059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.338174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.338205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.338354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.338373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 
00:29:54.023 [2024-07-15 11:45:28.338485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.338505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.338697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.338729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.338960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.338996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.339290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.339323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.339534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.339565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.339698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.339730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.340010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.340041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.340265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.340297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.340585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.340605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.340788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.340807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 
00:29:54.023 [2024-07-15 11:45:28.340971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.340991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.341168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.341200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.341394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.341414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.341593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.341624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.341881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.341912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.342169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.342200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.342474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.342506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.342643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.342674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.342936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.342967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.343085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.343116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 
00:29:54.023 [2024-07-15 11:45:28.343236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.343294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.343527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.343546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.343707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.343727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.343850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.343870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.343970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.343990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.344167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.344198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.344469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.344501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.344702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.344722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.344823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.023 [2024-07-15 11:45:28.344843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.023 qpair failed and we were unable to recover it. 00:29:54.023 [2024-07-15 11:45:28.344962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.344983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 
00:29:54.024 [2024-07-15 11:45:28.345167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.345186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.345288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.345308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.345507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.345527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.345678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.345696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.345908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.345927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.346001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.346020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.346264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.346284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.346542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.346561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.346731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.346750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.346855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.346886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 
00:29:54.024 [2024-07-15 11:45:28.347037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.347068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.347283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.347315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.347460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.347496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.347766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.347797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.348077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.348108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.348312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.348345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.348548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.348567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.348820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.348839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.349014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.349033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.349310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.349343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 
00:29:54.024 [2024-07-15 11:45:28.349630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.349663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.349947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.349967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.350273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.350304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.350588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.350618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.350819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.350850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.351130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.351161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.351387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.351419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.351704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.351737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.352028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.352059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.352250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.352310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 
00:29:54.024 [2024-07-15 11:45:28.352616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.352656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.352892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.352924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.353115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.353146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.353429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.353449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.353624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.353646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.353837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.353869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.354152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.354184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.354476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.354509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.024 [2024-07-15 11:45:28.354787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.354824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 
00:29:54.024 [2024-07-15 11:45:28.354883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c7e60 (9): Bad file descriptor 00:29:54.024 [2024-07-15 11:45:28.355351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.024 [2024-07-15 11:45:28.355422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.024 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.355731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.355767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.355989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.356022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.356329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.356363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.356634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.356666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.356934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.356968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.357277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.357310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.357549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.357583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.357899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.357930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 
00:29:54.025 [2024-07-15 11:45:28.358139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.358171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.358361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.358396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.358609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.358641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.358850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.358881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.359161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.359193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.359407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.359441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.359733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.359768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.360031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.360062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.360212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.360243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.360561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.360593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 
00:29:54.025 [2024-07-15 11:45:28.360872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.360903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.361196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.361228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.361511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.361544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.361753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.361786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.361976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.362008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.362212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.362245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.362580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.362613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.362763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.362787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.363050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.363070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.363353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.363373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 
00:29:54.025 [2024-07-15 11:45:28.363551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.363570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.363835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.363866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.364067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.364099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.364310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.364343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.364552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.364572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.364781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.364812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.365136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.365167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.365374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.365407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.365604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.365623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.365807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.365826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 
00:29:54.025 [2024-07-15 11:45:28.366085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.366115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.025 [2024-07-15 11:45:28.366270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.025 [2024-07-15 11:45:28.366291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.025 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.366474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.366494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.366674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.366693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.366903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.366934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.367268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.367301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.367554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.367573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.367677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.367697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.367932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.367951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.368186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.368206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 
00:29:54.026 [2024-07-15 11:45:28.368382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.368403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.368638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.368669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.368928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.368959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.369274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.369318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.369566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.369586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.369706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.369725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.369969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.369989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.370253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.370294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.370527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.370562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.370848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.370879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 
00:29:54.026 [2024-07-15 11:45:28.371174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.371206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.371404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.371436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.371589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.371620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.371812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.371843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.372077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.372108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.372418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.372451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.372687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.372718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.372934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.372971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.373270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.373302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.373574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.373604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 
00:29:54.026 [2024-07-15 11:45:28.373809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.373841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.374048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.374079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.374273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.374306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.374558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.374580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.374871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.374903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.375163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.375194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.375396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.375416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.375701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.375720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.375967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.375986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.026 qpair failed and we were unable to recover it. 00:29:54.026 [2024-07-15 11:45:28.376159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.026 [2024-07-15 11:45:28.376179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 
00:29:54.027 [2024-07-15 11:45:28.376481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.376501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.376686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.376706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.376964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.376995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.377252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.377294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.377510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.377541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.377800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.377831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.378140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.378172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.378374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.378406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.378597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.378628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.378890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.378922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 
00:29:54.027 [2024-07-15 11:45:28.379113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.379144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.379430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.379462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.379750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.379785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.380080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.380111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.380392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.380424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.380713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.380745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.380999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.381030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.381342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.381374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.381644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.381675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.381937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.381956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 
00:29:54.027 [2024-07-15 11:45:28.382131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.382151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.382335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.382367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.382654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.382685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.382978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.383010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.383301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.383333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.383522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.383553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.383821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.383852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.384139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.384175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.384422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.384455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-07-15 11:45:28.384661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.027 [2024-07-15 11:45:28.384705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.027 qpair failed and we were unable to recover it. 
00:29:54.027 [2024-07-15 11:45:28.384891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.027 [2024-07-15 11:45:28.384911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:54.027 qpair failed and we were unable to recover it.
00:29:54.027 [the identical posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420" pair repeats for every reconnect attempt between 11:45:28.384891 and 11:45:28.442351, each attempt ending with "qpair failed and we were unable to recover it."]
00:29:54.334 [2024-07-15 11:45:28.442351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.334 [2024-07-15 11:45:28.442395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:54.334 qpair failed and we were unable to recover it.
00:29:54.334 [2024-07-15 11:45:28.442699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.334 [2024-07-15 11:45:28.442731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.334 qpair failed and we were unable to recover it. 00:29:54.335 [2024-07-15 11:45:28.443036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.335 [2024-07-15 11:45:28.443068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.335 qpair failed and we were unable to recover it. 00:29:54.335 [2024-07-15 11:45:28.443345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.335 [2024-07-15 11:45:28.443378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.335 qpair failed and we were unable to recover it. 00:29:54.335 [2024-07-15 11:45:28.443670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.335 [2024-07-15 11:45:28.443701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.335 qpair failed and we were unable to recover it. 00:29:54.335 [2024-07-15 11:45:28.443921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.335 [2024-07-15 11:45:28.443953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.335 qpair failed and we were unable to recover it. 00:29:54.335 [2024-07-15 11:45:28.444248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.335 [2024-07-15 11:45:28.444289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.335 qpair failed and we were unable to recover it. 00:29:54.335 [2024-07-15 11:45:28.444571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.335 [2024-07-15 11:45:28.444602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.335 qpair failed and we were unable to recover it. 00:29:54.335 [2024-07-15 11:45:28.444915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.335 [2024-07-15 11:45:28.444935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.335 qpair failed and we were unable to recover it. 00:29:54.335 [2024-07-15 11:45:28.445194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.335 [2024-07-15 11:45:28.445225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.335 qpair failed and we were unable to recover it. 00:29:54.335 [2024-07-15 11:45:28.445571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.335 [2024-07-15 11:45:28.445604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.335 qpair failed and we were unable to recover it. 
00:29:54.335 [2024-07-15 11:45:28.445967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.335 [2024-07-15 11:45:28.446000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.335 qpair failed and we were unable to recover it. 00:29:54.335 [2024-07-15 11:45:28.446147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.335 [2024-07-15 11:45:28.446179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.335 qpair failed and we were unable to recover it. 00:29:54.335 [2024-07-15 11:45:28.446394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.335 [2024-07-15 11:45:28.446427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.335 qpair failed and we were unable to recover it. 00:29:54.335 [2024-07-15 11:45:28.446720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.335 [2024-07-15 11:45:28.446751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.335 qpair failed and we were unable to recover it. 00:29:54.335 [2024-07-15 11:45:28.447045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.335 [2024-07-15 11:45:28.447077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.335 qpair failed and we were unable to recover it. 00:29:54.335 [2024-07-15 11:45:28.447329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.335 [2024-07-15 11:45:28.447362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.335 qpair failed and we were unable to recover it. 00:29:54.335 [2024-07-15 11:45:28.447684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.336 [2024-07-15 11:45:28.447717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.336 qpair failed and we were unable to recover it. 00:29:54.336 [2024-07-15 11:45:28.448003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.336 [2024-07-15 11:45:28.448035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.336 qpair failed and we were unable to recover it. 00:29:54.336 [2024-07-15 11:45:28.448320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.336 [2024-07-15 11:45:28.448353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.336 qpair failed and we were unable to recover it. 00:29:54.336 [2024-07-15 11:45:28.448654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.336 [2024-07-15 11:45:28.448685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.336 qpair failed and we were unable to recover it. 
00:29:54.336 [2024-07-15 11:45:28.448898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.336 [2024-07-15 11:45:28.448930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.336 qpair failed and we were unable to recover it. 00:29:54.336 [2024-07-15 11:45:28.449171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.336 [2024-07-15 11:45:28.449204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.336 qpair failed and we were unable to recover it. 00:29:54.336 [2024-07-15 11:45:28.449428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.336 [2024-07-15 11:45:28.449462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.336 qpair failed and we were unable to recover it. 00:29:54.336 [2024-07-15 11:45:28.449743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.336 [2024-07-15 11:45:28.449775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.336 qpair failed and we were unable to recover it. 00:29:54.336 [2024-07-15 11:45:28.450077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.336 [2024-07-15 11:45:28.450097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.336 qpair failed and we were unable to recover it. 00:29:54.336 [2024-07-15 11:45:28.450389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.336 [2024-07-15 11:45:28.450410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.336 qpair failed and we were unable to recover it. 00:29:54.336 [2024-07-15 11:45:28.450595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.336 [2024-07-15 11:45:28.450615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.336 qpair failed and we were unable to recover it. 00:29:54.336 [2024-07-15 11:45:28.450782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.336 [2024-07-15 11:45:28.450805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.336 qpair failed and we were unable to recover it. 00:29:54.336 [2024-07-15 11:45:28.451088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.336 [2024-07-15 11:45:28.451119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.336 qpair failed and we were unable to recover it. 00:29:54.336 [2024-07-15 11:45:28.451416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.336 [2024-07-15 11:45:28.451450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.336 qpair failed and we were unable to recover it. 
00:29:54.336 [2024-07-15 11:45:28.451740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.336 [2024-07-15 11:45:28.451772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.336 qpair failed and we were unable to recover it. 00:29:54.336 [2024-07-15 11:45:28.452053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.336 [2024-07-15 11:45:28.452084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.336 qpair failed and we were unable to recover it. 00:29:54.336 [2024-07-15 11:45:28.452382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.336 [2024-07-15 11:45:28.452416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.336 qpair failed and we were unable to recover it. 00:29:54.336 [2024-07-15 11:45:28.452724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.452756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.453017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.453049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.453294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.453326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.453624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.453657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.453928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.453948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.454201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.454221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.454477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.454498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 
00:29:54.337 [2024-07-15 11:45:28.454805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.454837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.455112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.455144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.455375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.455409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.455632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.455653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.455842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.455863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.456101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.456122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.456420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.456441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.456747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.456779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.457009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.457041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.457349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.457383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 
00:29:54.337 [2024-07-15 11:45:28.457701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.457734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.457980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.458012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.458288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.458321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.458616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.458648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.458953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.458975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.459279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.459312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.459608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.459640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.459932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.459963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.460163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.460195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.460402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.460435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 
00:29:54.337 [2024-07-15 11:45:28.460726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.460747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.461051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.461083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.461281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.461317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.461636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.461669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.461921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.461953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.462247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.462289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.462572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.462604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.462819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.462857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.463137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.463169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.463493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.463526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 
00:29:54.337 [2024-07-15 11:45:28.463733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.463774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.464044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.464065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.464372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.464405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.464718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.464750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.464960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.337 [2024-07-15 11:45:28.464991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.337 qpair failed and we were unable to recover it. 00:29:54.337 [2024-07-15 11:45:28.465219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.465250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.465604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.465637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.465782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.465816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.466089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.466110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.466391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.466424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 
00:29:54.338 [2024-07-15 11:45:28.466751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.466783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.467078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.467111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.467306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.467340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.467575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.467607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.467845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.467878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.468153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.468174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.468468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.468501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.468807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.468854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.468979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.469012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.469152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.469184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 
00:29:54.338 [2024-07-15 11:45:28.469462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.469496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.469739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.469759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.469989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.470009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.470278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.470299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.470553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.470585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.470784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.470816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.471011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.471044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.471315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.471336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.471534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.471555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.471654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.471676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 
00:29:54.338 [2024-07-15 11:45:28.471861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.471893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.472193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.472226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.472509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.472542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.472809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.472829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.473099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.473119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.473381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.473419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.473653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.473686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.338 qpair failed and we were unable to recover it. 00:29:54.338 [2024-07-15 11:45:28.474013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.338 [2024-07-15 11:45:28.474046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.474315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.474350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.474569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.474589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 
00:29:54.339 [2024-07-15 11:45:28.474771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.474791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.475054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.475086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.475356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.475389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.475663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.475708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.475822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.475844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.476117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.476138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.476409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.476431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.476552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.476573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.476760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.476781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.477004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.477025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 
00:29:54.339 [2024-07-15 11:45:28.477309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.477343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.477651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.477684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.477894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.477926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.478157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.478178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.478365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.478388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.478579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.478599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.478782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.478803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.479046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.479066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.479287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.479308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.479597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.479629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 
00:29:54.339 [2024-07-15 11:45:28.479931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.479964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.480230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.480272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.480575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.480609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.480902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.480923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.481218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.481265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.481558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.481592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.481832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.481865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.482139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.482160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.482448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.482482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.482728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.482760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 
00:29:54.339 [2024-07-15 11:45:28.483074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.483108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.483309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.339 [2024-07-15 11:45:28.483342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.339 qpair failed and we were unable to recover it. 00:29:54.339 [2024-07-15 11:45:28.483546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.340 [2024-07-15 11:45:28.483579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.340 qpair failed and we were unable to recover it. 00:29:54.340 [2024-07-15 11:45:28.483802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.340 [2024-07-15 11:45:28.483834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.340 qpair failed and we were unable to recover it. 00:29:54.340 [2024-07-15 11:45:28.484065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.340 [2024-07-15 11:45:28.484096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.340 qpair failed and we were unable to recover it. 00:29:54.340 [2024-07-15 11:45:28.484398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.341 [2024-07-15 11:45:28.484433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.341 qpair failed and we were unable to recover it. 00:29:54.341 [2024-07-15 11:45:28.484716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.341 [2024-07-15 11:45:28.484750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.341 qpair failed and we were unable to recover it. 00:29:54.341 [2024-07-15 11:45:28.485020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.341 [2024-07-15 11:45:28.485052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.341 qpair failed and we were unable to recover it. 00:29:54.341 [2024-07-15 11:45:28.485368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.341 [2024-07-15 11:45:28.485402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.341 qpair failed and we were unable to recover it. 00:29:54.341 [2024-07-15 11:45:28.485603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.341 [2024-07-15 11:45:28.485636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.341 qpair failed and we were unable to recover it. 
00:29:54.341 [2024-07-15 11:45:28.485931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.341 [2024-07-15 11:45:28.485952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.341 qpair failed and we were unable to recover it. 00:29:54.341 [2024-07-15 11:45:28.486220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.341 [2024-07-15 11:45:28.486240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.341 qpair failed and we were unable to recover it. 00:29:54.341 [2024-07-15 11:45:28.486534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.341 [2024-07-15 11:45:28.486556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.341 qpair failed and we were unable to recover it. 00:29:54.341 [2024-07-15 11:45:28.486750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.342 [2024-07-15 11:45:28.486790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.342 qpair failed and we were unable to recover it. 00:29:54.342 [2024-07-15 11:45:28.486990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.342 [2024-07-15 11:45:28.487023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.342 qpair failed and we were unable to recover it. 00:29:54.342 [2024-07-15 11:45:28.487347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.342 [2024-07-15 11:45:28.487382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.342 qpair failed and we were unable to recover it. 00:29:54.342 [2024-07-15 11:45:28.487673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.342 [2024-07-15 11:45:28.487705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.342 qpair failed and we were unable to recover it. 00:29:54.342 [2024-07-15 11:45:28.488000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.342 [2024-07-15 11:45:28.488033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.342 qpair failed and we were unable to recover it. 00:29:54.342 [2024-07-15 11:45:28.488328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.342 [2024-07-15 11:45:28.488362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.342 qpair failed and we were unable to recover it. 00:29:54.342 [2024-07-15 11:45:28.488655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.342 [2024-07-15 11:45:28.488688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.342 qpair failed and we were unable to recover it. 
00:29:54.342 [2024-07-15 11:45:28.488982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.342 [2024-07-15 11:45:28.489015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.342 qpair failed and we were unable to recover it. 00:29:54.342 [2024-07-15 11:45:28.489307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.342 [2024-07-15 11:45:28.489341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.342 qpair failed and we were unable to recover it. 00:29:54.342 [2024-07-15 11:45:28.489552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.343 [2024-07-15 11:45:28.489584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.343 qpair failed and we were unable to recover it. 00:29:54.343 [2024-07-15 11:45:28.489881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.343 [2024-07-15 11:45:28.489902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.343 qpair failed and we were unable to recover it. 00:29:54.343 [2024-07-15 11:45:28.490015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.343 [2024-07-15 11:45:28.490036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.343 qpair failed and we were unable to recover it. 00:29:54.343 [2024-07-15 11:45:28.490248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.343 [2024-07-15 11:45:28.490277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.343 qpair failed and we were unable to recover it. 00:29:54.343 [2024-07-15 11:45:28.490453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.343 [2024-07-15 11:45:28.490474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.343 qpair failed and we were unable to recover it. 00:29:54.343 [2024-07-15 11:45:28.490724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.343 [2024-07-15 11:45:28.490757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.343 qpair failed and we were unable to recover it. 00:29:54.343 [2024-07-15 11:45:28.491085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.343 [2024-07-15 11:45:28.491117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.343 qpair failed and we were unable to recover it. 00:29:54.343 [2024-07-15 11:45:28.491344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.343 [2024-07-15 11:45:28.491366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.343 qpair failed and we were unable to recover it. 
00:29:54.343 [2024-07-15 11:45:28.491640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.343 [2024-07-15 11:45:28.491662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.343 qpair failed and we were unable to recover it. 00:29:54.343 [2024-07-15 11:45:28.491915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.343 [2024-07-15 11:45:28.491935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.343 qpair failed and we were unable to recover it. 00:29:54.343 [2024-07-15 11:45:28.492128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.343 [2024-07-15 11:45:28.492149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.343 qpair failed and we were unable to recover it. 00:29:54.343 [2024-07-15 11:45:28.492336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.343 [2024-07-15 11:45:28.492358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.343 qpair failed and we were unable to recover it. 00:29:54.343 [2024-07-15 11:45:28.492464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.343 [2024-07-15 11:45:28.492488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.343 qpair failed and we were unable to recover it. 00:29:54.343 [2024-07-15 11:45:28.492767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.343 [2024-07-15 11:45:28.492800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.343 qpair failed and we were unable to recover it. 00:29:54.343 [2024-07-15 11:45:28.493151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.343 [2024-07-15 11:45:28.493183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.343 qpair failed and we were unable to recover it. 00:29:54.343 [2024-07-15 11:45:28.493484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.343 [2024-07-15 11:45:28.493519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.343 qpair failed and we were unable to recover it. 00:29:54.343 [2024-07-15 11:45:28.493829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.343 [2024-07-15 11:45:28.493862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.343 qpair failed and we were unable to recover it. 00:29:54.343 [2024-07-15 11:45:28.494101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.343 [2024-07-15 11:45:28.494134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.343 qpair failed and we were unable to recover it. 
00:29:54.343 [2024-07-15 11:45:28.494457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.343 [2024-07-15 11:45:28.494490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.343 qpair failed and we were unable to recover it. 00:29:54.343 [2024-07-15 11:45:28.494736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.343 [2024-07-15 11:45:28.494769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.343 qpair failed and we were unable to recover it. 00:29:54.343 [2024-07-15 11:45:28.494934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.343 [2024-07-15 11:45:28.494981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.343 qpair failed and we were unable to recover it. 00:29:54.343 [2024-07-15 11:45:28.495176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.343 [2024-07-15 11:45:28.495196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.343 qpair failed and we were unable to recover it. 00:29:54.343 [2024-07-15 11:45:28.495443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.343 [2024-07-15 11:45:28.495464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.343 qpair failed and we were unable to recover it. 00:29:54.343 [2024-07-15 11:45:28.495640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.343 [2024-07-15 11:45:28.495672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.343 qpair failed and we were unable to recover it. 00:29:54.343 [2024-07-15 11:45:28.495889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.344 [2024-07-15 11:45:28.495922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.344 qpair failed and we were unable to recover it. 00:29:54.344 [2024-07-15 11:45:28.496210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.344 [2024-07-15 11:45:28.496231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.344 qpair failed and we were unable to recover it. 00:29:54.344 [2024-07-15 11:45:28.496362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.344 [2024-07-15 11:45:28.496384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.344 qpair failed and we were unable to recover it. 00:29:54.344 [2024-07-15 11:45:28.496680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.344 [2024-07-15 11:45:28.496712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.344 qpair failed and we were unable to recover it. 
00:29:54.344 [2024-07-15 11:45:28.497026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.344 [2024-07-15 11:45:28.497058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.344 qpair failed and we were unable to recover it. 00:29:54.344 [2024-07-15 11:45:28.497358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.344 [2024-07-15 11:45:28.497392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.344 qpair failed and we were unable to recover it. 00:29:54.344 [2024-07-15 11:45:28.497708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.344 [2024-07-15 11:45:28.497740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.344 qpair failed and we were unable to recover it. 00:29:54.344 [2024-07-15 11:45:28.498014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.344 [2024-07-15 11:45:28.498047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.344 qpair failed and we were unable to recover it. 00:29:54.344 [2024-07-15 11:45:28.498331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.344 [2024-07-15 11:45:28.498364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.344 qpair failed and we were unable to recover it. 00:29:54.344 [2024-07-15 11:45:28.498578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.344 [2024-07-15 11:45:28.498611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.344 qpair failed and we were unable to recover it. 00:29:54.344 [2024-07-15 11:45:28.498877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.344 [2024-07-15 11:45:28.498899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.344 qpair failed and we were unable to recover it. 00:29:54.344 [2024-07-15 11:45:28.499012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.344 [2024-07-15 11:45:28.499032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.344 qpair failed and we were unable to recover it. 00:29:54.344 [2024-07-15 11:45:28.499335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.344 [2024-07-15 11:45:28.499357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.344 qpair failed and we were unable to recover it. 00:29:54.344 [2024-07-15 11:45:28.499553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.344 [2024-07-15 11:45:28.499573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.344 qpair failed and we were unable to recover it. 
00:29:54.344 [2024-07-15 11:45:28.499766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.344 [2024-07-15 11:45:28.499787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.344 qpair failed and we were unable to recover it. 00:29:54.344 [2024-07-15 11:45:28.500083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.344 [2024-07-15 11:45:28.500104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.344 qpair failed and we were unable to recover it. 00:29:54.344 [2024-07-15 11:45:28.500368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.344 [2024-07-15 11:45:28.500391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.344 qpair failed and we were unable to recover it. 00:29:54.344 [2024-07-15 11:45:28.500665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.344 [2024-07-15 11:45:28.500698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.344 qpair failed and we were unable to recover it. 00:29:54.344 [2024-07-15 11:45:28.500929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.344 [2024-07-15 11:45:28.500962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.344 qpair failed and we were unable to recover it. 00:29:54.344 [2024-07-15 11:45:28.501107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.344 [2024-07-15 11:45:28.501140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.344 qpair failed and we were unable to recover it. 00:29:54.344 [2024-07-15 11:45:28.501435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.344 [2024-07-15 11:45:28.501483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.344 qpair failed and we were unable to recover it. 00:29:54.344 [2024-07-15 11:45:28.501722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.344 [2024-07-15 11:45:28.501754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.344 qpair failed and we were unable to recover it. 00:29:54.344 [2024-07-15 11:45:28.501982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.344 [2024-07-15 11:45:28.502004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.344 qpair failed and we were unable to recover it. 00:29:54.344 [2024-07-15 11:45:28.502270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.344 [2024-07-15 11:45:28.502316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.344 qpair failed and we were unable to recover it. 
00:29:54.344 [2024-07-15 11:45:28.502537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.344 [2024-07-15 11:45:28.502569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.344 qpair failed and we were unable to recover it. 00:29:54.344 [2024-07-15 11:45:28.502797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.502818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.503117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.503150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.503490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.503524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.503731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.503768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.504130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.504164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.504403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.504439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.504646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.504678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.504820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.504854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.505102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.505124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 
00:29:54.345 [2024-07-15 11:45:28.505371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.505392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.505617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.505639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.505754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.505776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.506022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.506043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.506244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.506273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.506543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.506564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.506836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.506858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.507065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.507086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.507271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.507293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.507599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.507631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 
00:29:54.345 [2024-07-15 11:45:28.507856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.507889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.508073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.508106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.508402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.508436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.508762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.508785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.509054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.509086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.509290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.509324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.509530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.509563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.509817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.509850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.510045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.510066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.510339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.510361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 
00:29:54.345 [2024-07-15 11:45:28.510548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.510569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.510798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.510818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.511012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.511034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.511316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.511351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.511555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.511589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.511901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.511934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.512146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.512179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.512534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.512569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.512793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.512825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.513197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.513230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 
00:29:54.345 [2024-07-15 11:45:28.513565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.513599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.513848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.513881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.514184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.514217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.345 qpair failed and we were unable to recover it. 00:29:54.345 [2024-07-15 11:45:28.514533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.345 [2024-07-15 11:45:28.514567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.514796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.514836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.515119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.515153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.515351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.515386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.515592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.515625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.515957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.515990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.516289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.516324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 
00:29:54.346 [2024-07-15 11:45:28.516551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.516584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.516809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.516831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.517104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.517125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.517334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.517356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.517622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.517644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.517799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.517821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.517992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.518029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.518239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.518282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.518564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.518598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.518914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.518948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 
00:29:54.346 [2024-07-15 11:45:28.519251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.519280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.519478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.519499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.519774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.519807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.520016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.520049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.520326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.520360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.520635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.520678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.520984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.521017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.521280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.521315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.521623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.521656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.521921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.521955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 
00:29:54.346 [2024-07-15 11:45:28.522268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.522303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.522611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.522644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.522776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.522798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.522925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.522946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.523240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.523267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.523493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.523525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.523760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.523794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.524090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.524123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.524327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.524361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.524613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.524646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 
00:29:54.346 [2024-07-15 11:45:28.524973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.525006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.525308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.525331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.525630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.525663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.525830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.525864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.526117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.526155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.526430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.526464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.526770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.526803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.527061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.527094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.527372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.527406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.527614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.527647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 
00:29:54.346 [2024-07-15 11:45:28.527791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.527813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.527997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.528018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.528330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.528352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.346 [2024-07-15 11:45:28.528634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.346 [2024-07-15 11:45:28.528655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.346 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.528849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.528870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.529115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.529138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.529339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.529361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.529632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.529653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.529906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.529928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.530202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.530249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 
00:29:54.347 [2024-07-15 11:45:28.530492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.530525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.530799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.530832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.531030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.531051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.531240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.531269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.531565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.531598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.531799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.531821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.532035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.532056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.532331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.532353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.532460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.532482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.532597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.532618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 
00:29:54.347 [2024-07-15 11:45:28.532756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.532777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.532992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.533025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.533228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.533272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.533479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.533512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.533718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.533750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.533969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.534002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.534203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.534237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.534445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.534479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.534620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.534652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.534925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.534957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 
00:29:54.347 [2024-07-15 11:45:28.535161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.535193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.535418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.535452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.535614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.535647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.535965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.535998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.536220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.536273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.536608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.536640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.536874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.536906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.537027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.537049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.537231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.537277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.537575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.537607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 
00:29:54.347 [2024-07-15 11:45:28.537755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.537788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.538061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.538093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.538303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.538338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.538629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.538663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.538886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.538918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2971383 Killed "${NVMF_APP[@]}" "$@" 00:29:54.347 [2024-07-15 11:45:28.539044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.539066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.539331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.539353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.539483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.539505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.539589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.539610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 
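The "Killed ${NVMF_APP[@]}" message from target_disconnect.sh line 36 above is the key to the surrounding error storm: on Linux, errno 111 is ECONNREFUSED, so once the nvmf target process has been killed nothing is listening on 10.0.0.2:4420 and every reconnect attempt from the initiator side is refused until a new target is started further down in the log. A minimal sketch of the same failure mode follows; it assumes only a generic Linux host with python3 and a local port with no listener, and is purely illustrative rather than part of the test scripts.

    import errno, socket

    # Connect to a TCP port with no listener; connect_ex() returns the errno
    # value instead of raising, so the refused connection shows up as 111.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    rc = s.connect_ex(("127.0.0.1", 4420))   # assumption: nothing listening on 4420 locally
    print(rc, errno.errorcode.get(rc))       # expected on Linux: 111 ECONNREFUSED
    s.close()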
00:29:54.347 11:45:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:29:54.347 [2024-07-15 11:45:28.539735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.539757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.539942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.539964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 11:45:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:54.347 [2024-07-15 11:45:28.540185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.540206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 11:45:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:54.347 [2024-07-15 11:45:28.540479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.540502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 11:45:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:54.347 [2024-07-15 11:45:28.540629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.540651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.540828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.540851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 11:45:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:54.347 [2024-07-15 11:45:28.541043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.541064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.541267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.541289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 
00:29:54.347 [2024-07-15 11:45:28.541512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.541533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.541639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.541661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.541762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.541783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.541915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.541936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.542214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.542235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.542448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.542469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.347 [2024-07-15 11:45:28.542651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.347 [2024-07-15 11:45:28.542672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.347 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.542846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.542867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.543138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.543160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.543268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.543290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 
00:29:54.348 [2024-07-15 11:45:28.543484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.543505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.543720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.543741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.543834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.543855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.544062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.544083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.544185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.544206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.544391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.544428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.544547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.544569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.544816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.544837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.545010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.545031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.545154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.545176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 
00:29:54.348 [2024-07-15 11:45:28.545369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.545391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.545651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.545672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.545847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.545868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.546117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.546138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.546405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.546427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.546625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.546646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.546822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.546843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.547088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.547109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.547297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.547323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.547443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.547464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 
00:29:54.348 [2024-07-15 11:45:28.547732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.547753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.547925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 11:45:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2972208 00:29:54.348 [2024-07-15 11:45:28.547947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.548158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.548179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 11:45:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2972208 00:29:54.348 [2024-07-15 11:45:28.548279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.548303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.548429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.548450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 11:45:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:54.348 [2024-07-15 11:45:28.548647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 11:45:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2972208 ']' 00:29:54.348 [2024-07-15 11:45:28.548669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.548934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 11:45:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.348 [2024-07-15 11:45:28.548955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.549162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.549185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 
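The xtrace lines interleaved here show the recovery path: nvmf_target_disconnect_tc2 calls disconnect_init, which runs nvmfappstart -m 0xF0; that in turn launches a fresh target with "ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF0", records nvmfpid=2972208, and waitforlisten 2972208 waits for the new process to come up. While that happens the initiator keeps retrying, and the ECONNREFUSED messages stop once a listener is back on 10.0.0.2:4420. A rough sketch of that "retry until something listens again" behavior is below; it is generic Python against a hypothetical host/port mirroring the log, not SPDK code.

    import socket, time

    def wait_for_listener(host: str, port: int, timeout: float = 30.0) -> bool:
        # Poll until a TCP connect succeeds (listener back up) or the timeout expires.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                if s.connect_ex((host, port)) == 0:
                    return True      # connection accepted: target is listening again
            time.sleep(0.5)          # back off briefly between refused attempts
        return False

    # Hypothetical usage mirroring the addresses seen in this log:
    # wait_for_listener("10.0.0.2", 4420)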
00:29:54.348 11:45:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:54.348 [2024-07-15 11:45:28.549313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.549335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.549516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 11:45:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:54.348 [2024-07-15 11:45:28.549538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.549734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.549756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 11:45:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:54.348 [2024-07-15 11:45:28.549930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.549952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 11:45:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:54.348 [2024-07-15 11:45:28.550146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.550168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.550281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.550303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.550480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.550501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.550694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.550714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 
00:29:54.348 [2024-07-15 11:45:28.550833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.550854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.551039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.551059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.551277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.551299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.551546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.551568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.551671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.551696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.551824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.551844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.552042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.552063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.552249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.552279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.552390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.552409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.552599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.552620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 
00:29:54.348 [2024-07-15 11:45:28.552714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.552734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.552935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.552956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.553055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.553075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.553247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.553276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.553480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.553501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.553615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.553635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.553872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.553893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.554012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.554032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.554316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.554338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.554622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.554643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 
00:29:54.348 [2024-07-15 11:45:28.554911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.554932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.555100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.555121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.555440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.555462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.555638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.555658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.555875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.555896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.556103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.556124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.556301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.556323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.556516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.556537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.556712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.556733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.556869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.556890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 
00:29:54.348 [2024-07-15 11:45:28.556995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.557016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.557271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.557292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.557481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.557502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.557776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.557797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.348 qpair failed and we were unable to recover it. 00:29:54.348 [2024-07-15 11:45:28.557894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.348 [2024-07-15 11:45:28.557914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.558021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.558042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.558285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.558307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.558554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.558574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.558687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.558707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.558818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.558838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 
00:29:54.349 [2024-07-15 11:45:28.559001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.559021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.559208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.559228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.559495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.559517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.559731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.559752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.559926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.559950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.560223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.560244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.560436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.560457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.560654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.560675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.560925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.560945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.561064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.561085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 
00:29:54.349 [2024-07-15 11:45:28.561208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.561229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.561497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.561518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.561621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.561642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.561814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.561835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.562178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.562198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.562315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.562336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.562530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.562550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.562794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.562814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.563003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.563024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.563365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.563386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 
00:29:54.349 [2024-07-15 11:45:28.563499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.563519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.563690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.563711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.563830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.563850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.564113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.564133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.564262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.564283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.564574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.564595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.564894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.564915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.565089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.565110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.565350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.565372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.565556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.565576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 
00:29:54.349 [2024-07-15 11:45:28.565767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.565787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.566006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.566027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.566231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.566252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.566524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.566546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.566718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.566738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.566928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.566948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.567138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.567159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.567408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.567430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.567725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.567746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.568013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.568034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 
00:29:54.349 [2024-07-15 11:45:28.568135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.568155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.568412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.568433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.568708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.568727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.568908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.568929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.569101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.569124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.569315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.569336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.569511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.569532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.569775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.569796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.570018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.570039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.570225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.570246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 
00:29:54.349 [2024-07-15 11:45:28.570427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.570448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.570574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.570595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.570863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.570884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.571181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.571202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.571401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.571423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.571512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.571534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.571775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.571795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.572091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.572110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.572432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.572454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.572725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.572745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 
00:29:54.349 [2024-07-15 11:45:28.573028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.573049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.573246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.573273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.573542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.573562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.573850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.573870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.349 qpair failed and we were unable to recover it. 00:29:54.349 [2024-07-15 11:45:28.574118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.349 [2024-07-15 11:45:28.574139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.574339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.574360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.574632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.574653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.574912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.574932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.575134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.575155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.575430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.575451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 
00:29:54.350 [2024-07-15 11:45:28.575620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.575641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.575838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.575858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.576137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.576157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.576352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.576373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.576501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.576522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.576689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.576710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.576966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.576987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.577185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.577205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.577378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.577399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.577582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.577603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 
00:29:54.350 [2024-07-15 11:45:28.577776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.577797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.578063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.578083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.578372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.578394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.578667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.578689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.578883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.578906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.579127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.579149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.579341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.579363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.579606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.579628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.579871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.579892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.580072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.580092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 
00:29:54.350 [2024-07-15 11:45:28.580396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.580417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.580610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.580630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.580929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.580950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.581249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.581278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.581457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.581478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.581722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.581742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.582040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.582061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.582269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.582290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.582571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.582592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.582791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.582812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 
00:29:54.350 [2024-07-15 11:45:28.583061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.583081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.583336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.583357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.583471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.583491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.583688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.583709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.583902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.583923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.584200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.584221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.584443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.584464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.584674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.584695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.585053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.585075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.585351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.585372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 
00:29:54.350 [2024-07-15 11:45:28.585650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.585670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.585881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.585904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.586170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.586191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.586418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.586439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.586574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.586595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.586815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.586835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.587084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.587106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.587383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.587405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.587632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.587652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.587848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.587869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 
00:29:54.350 [2024-07-15 11:45:28.587982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.588003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.588231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.588252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.588475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.588497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.588773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.588793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.588964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.588986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.589238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.589266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.589571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.589592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.589786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.589808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.590002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.590024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.590225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.590246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 
00:29:54.350 [2024-07-15 11:45:28.590502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.590523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.590741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.590762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.591031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.591051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.591171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.591192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.591385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.591407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.591676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.591697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.591921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.591943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.592193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.592214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.592507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.592528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.592775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.592796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 
00:29:54.350 [2024-07-15 11:45:28.592996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.593017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.593266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.350 [2024-07-15 11:45:28.593287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.350 qpair failed and we were unable to recover it. 00:29:54.350 [2024-07-15 11:45:28.593540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.593560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.593742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.593763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.593887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.593908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.594197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.594218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.594431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.594453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.594552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.594573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.594826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.594846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.595101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.595122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 
00:29:54.351 [2024-07-15 11:45:28.595397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.595418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.595672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.595699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.595871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.595891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.596193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.596214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.596491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.596512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.596830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.596850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.597036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.597056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.597175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.597196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.597469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.597489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.597735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.597755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 
00:29:54.351 [2024-07-15 11:45:28.598020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.598040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.598310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.598332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.598537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.598559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.598834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.598854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.599111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.599131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.599400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.599424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.599668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.599688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.599928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.599948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.600218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.600238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.600440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.600461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 
00:29:54.351 [2024-07-15 11:45:28.600643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.351 [2024-07-15 11:45:28.600663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:54.351 qpair failed and we were unable to recover it.
00:29:54.351 [2024-07-15 11:45:28.600832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.351 [2024-07-15 11:45:28.600852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:54.351 qpair failed and we were unable to recover it.
00:29:54.351 [2024-07-15 11:45:28.601121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.351 [2024-07-15 11:45:28.601141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:54.351 qpair failed and we were unable to recover it.
00:29:54.351 [2024-07-15 11:45:28.601358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.351 [2024-07-15 11:45:28.601381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:54.351 qpair failed and we were unable to recover it.
00:29:54.351 [2024-07-15 11:45:28.601646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.351 [2024-07-15 11:45:28.601666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:54.351 qpair failed and we were unable to recover it.
00:29:54.351 [2024-07-15 11:45:28.601923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.351 [2024-07-15 11:45:28.601943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:54.351 qpair failed and we were unable to recover it.
00:29:54.351 [2024-07-15 11:45:28.602216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.351 [2024-07-15 11:45:28.602238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:54.351 qpair failed and we were unable to recover it.
00:29:54.351 [2024-07-15 11:45:28.602471] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization...
00:29:54.351 [2024-07-15 11:45:28.602499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.351 [2024-07-15 11:45:28.602520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:54.351 [2024-07-15 11:45:28.602529] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:54.351 qpair failed and we were unable to recover it.
00:29:54.351 [2024-07-15 11:45:28.602797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.351 [2024-07-15 11:45:28.602817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:54.351 qpair failed and we were unable to recover it.
00:29:54.351 [2024-07-15 11:45:28.603010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.603027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.603287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.603308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.603607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.603628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.603796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.603816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.604074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.604094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.604281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.604304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.604490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.604511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.604784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.604807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.605010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.605032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.605208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.605228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 
00:29:54.351 [2024-07-15 11:45:28.605520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.605541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.605669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.605689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.605999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.606020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.606207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.606227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.606436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.606457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.606650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.606671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.606851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.606871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.607121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.607142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.607399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.607421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.607668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.607688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 
00:29:54.351 [2024-07-15 11:45:28.607982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.608003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.608273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.608295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.608483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.608503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.608773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.608794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.609080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.609105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.609325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.609347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.609590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.609611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.609796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.609816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.610033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.610054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 00:29:54.351 [2024-07-15 11:45:28.610322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.610343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.351 qpair failed and we were unable to recover it. 
00:29:54.351 [2024-07-15 11:45:28.610535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.351 [2024-07-15 11:45:28.610555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.610737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.610758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.610943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.610964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.611203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.611223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.611408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.611429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.611613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.611634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.611736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.611756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.611947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.611967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.612247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.612277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.612472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.612494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 
00:29:54.352 [2024-07-15 11:45:28.612773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.612793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.613006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.613027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.613289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.613310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.613581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.613602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.613846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.613866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.614139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.614176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.614415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.614436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.614615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.614635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.614901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.614922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.615222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.615242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 
00:29:54.352 [2024-07-15 11:45:28.615575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.615596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.615744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.615764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.615933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.615953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.616120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.616140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.616246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.616274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.616440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.616460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.616730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.616750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.616945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.616965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.617147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.617167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.617444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.617465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 
00:29:54.352 [2024-07-15 11:45:28.617677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.617698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.617830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.617851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.618099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.618119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.618231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.618251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.618380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.618404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.618585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.618606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.618722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.618742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.618929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.618949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.619137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.619158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.619325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.619345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 
00:29:54.352 [2024-07-15 11:45:28.619527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.619546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.619723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.619743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.619846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.619867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.620159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.620179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.620352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.620374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.620646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.620666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.620884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.620904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.621086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.621106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.621302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.621324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.621595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.621615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 
00:29:54.352 [2024-07-15 11:45:28.621738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.621759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.622024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.622044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.622240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.622268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.622368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.622389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.622563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.622582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.622845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.622865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.623132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.623152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.623410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.623431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.623618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.623637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.623833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.623853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 
00:29:54.352 [2024-07-15 11:45:28.624001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.624020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.624280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.624301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.624574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.624595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.624786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.624812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.625085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.625105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.625398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.625420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.625713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.625733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.625861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.625880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.626132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.626154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.626417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.626437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 
00:29:54.352 [2024-07-15 11:45:28.626706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.626726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.626911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.626930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.627198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.627218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.627477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.627498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.627764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.627788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.628057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.628077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.628339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.628360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.628599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.352 [2024-07-15 11:45:28.628620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.352 qpair failed and we were unable to recover it. 00:29:54.352 [2024-07-15 11:45:28.628855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.628875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.629142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.629161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 
00:29:54.353 [2024-07-15 11:45:28.629370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.629392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.629574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.629594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.629819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.629839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.630038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.630057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.630269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.630290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.630493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.630513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.630712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.630732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.630916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.630937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.631114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.631134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.631318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.631340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 
00:29:54.353 [2024-07-15 11:45:28.631538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.631559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.631868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.631889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.632201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.632221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.632497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.632518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.632788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.632808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.633071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.633092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.633270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.633291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.633590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.633609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.633852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.633872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.634051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.634071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 
00:29:54.353 [2024-07-15 11:45:28.634323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.634344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.634593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.634615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.634801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.634822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.635024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.635044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.635315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.635337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.635573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.635593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.635833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.635853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.636151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.636171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.636486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.636507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.636758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.636778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 
00:29:54.353 [2024-07-15 11:45:28.637040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.637060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.637275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.637296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.637489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.637509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.637686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.637706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.637816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.637840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.638117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.638137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.638394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.638415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.638679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.638700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.638944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.638965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.639199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.639219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 
00:29:54.353 [2024-07-15 11:45:28.639487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.639508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.639770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.639790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.639915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.639935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.640130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.640151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.640395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.640416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.640595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.640616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.640890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.640912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.641077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.641097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.641225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.641246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.641386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.641405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 
00:29:54.353 [2024-07-15 11:45:28.641666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.641686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.353 [2024-07-15 11:45:28.641867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.641888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.642208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.642229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.642407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.642428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.642627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.642647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.642776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.642796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.642982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.643002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.643292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.643314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.643500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.643520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.643782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.643803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 
00:29:54.353 [2024-07-15 11:45:28.644068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.644088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.644360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.644380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.644587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.644607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.644791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.644810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.645100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.645120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.645300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.645320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.645526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.645547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.645723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.645743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.645940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.645960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.646169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.646191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 
00:29:54.353 [2024-07-15 11:45:28.646322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.646344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.646526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.646546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.646810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.646831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.647010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.647030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.647204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.353 [2024-07-15 11:45:28.647227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.353 qpair failed and we were unable to recover it. 00:29:54.353 [2024-07-15 11:45:28.647526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.647547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.647810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.647829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.648073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.648093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.648334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.648355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.648595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.648615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 
00:29:54.354 [2024-07-15 11:45:28.648782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.648802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.649013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.649033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.649224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.649244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.649424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.649445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.649685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.649705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.649828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.649847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.650078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.650098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.650364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.650384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.650557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.650577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.650865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.650885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 
00:29:54.354 [2024-07-15 11:45:28.651053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.651073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.651342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.651363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.651653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.651673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.651920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.651940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.652141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.652162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.652430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.652451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.652743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.652764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.653025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.653045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.653294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.653315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.653550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.653570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 
00:29:54.354 [2024-07-15 11:45:28.653740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.653760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.653977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.653997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.654196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.654216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.654419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.654439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.654706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.654726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.654854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.654874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.655012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.655033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.655198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.655218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.655540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.655561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.655789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.655811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 
00:29:54.354 [2024-07-15 11:45:28.655992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.656012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.656287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.656310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.656487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.656508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.656669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.656689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.656821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.656844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.657132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.657152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.657315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.657336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.657568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.657587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.657791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.657811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.657988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.658008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 
00:29:54.354 [2024-07-15 11:45:28.658219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.658239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.658482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.658502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.658700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.658720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.658906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.658926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.659190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.659210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.659343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.659363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.659547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.659568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.659803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.659823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.660005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.660026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.660224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.660243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 
00:29:54.354 [2024-07-15 11:45:28.660461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.660482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.660655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.660677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.660846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.660866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.661108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.661128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.661336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.661357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.661474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.661494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.661674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.661694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.661887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.661906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.662117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.662139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.662407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.662428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 
00:29:54.354 [2024-07-15 11:45:28.662637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.662659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.662847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.662868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.663104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.663123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.663323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.663344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.663452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.663472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.354 qpair failed and we were unable to recover it. 00:29:54.354 [2024-07-15 11:45:28.663589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.354 [2024-07-15 11:45:28.663609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.663844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.663863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.664120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.664139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.664322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.664343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.664555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.664575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 
00:29:54.355 [2024-07-15 11:45:28.664741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.664780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.664949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.664969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.665245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.665273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.665409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.665429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.665590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.665627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.665864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.665884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.666062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.666082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.666282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.666302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.666538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.666558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.666681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.666701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 
00:29:54.355 [2024-07-15 11:45:28.666908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.666929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.667204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.667223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.667508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.667528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.667759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.667779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.668023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.668043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.668303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.668324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.668583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.668603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.668767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.668786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.668919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.668938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.669192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.669212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 
00:29:54.355 [2024-07-15 11:45:28.669323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.669344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.669462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.669482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.669736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.669756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.670060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.670080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.670366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.670386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.670516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.670536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.670790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.670810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.671047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.671067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.671368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.671388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.671586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.671606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 
00:29:54.355 [2024-07-15 11:45:28.671770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.671791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.672057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.672078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.672286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.672307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.672483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.672503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.672763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.672783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.673029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.673049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.673339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.673360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.673649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.673670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.673966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.673985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.674232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.674251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 
00:29:54.355 [2024-07-15 11:45:28.674442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.674462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.674721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.674740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.675028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.675047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.675280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.675301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.675589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.675612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.675795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.675816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.676046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.676065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.676297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.676318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.676426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.676446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.676689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.676708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 
00:29:54.355 [2024-07-15 11:45:28.676957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.676977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.677144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.677164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.677332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.677353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.677606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.677626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.677788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.677807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.678052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.678072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.678252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.678279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.678404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.678424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.678681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.678701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.678922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.678942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 
00:29:54.355 [2024-07-15 11:45:28.679270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.679292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.679471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.679490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.679771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.679791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.679973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.679992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.680186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.680207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.680455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.680476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.680642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.680661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.680843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.680863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.681126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.681146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.681348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.681368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 
00:29:54.355 [2024-07-15 11:45:28.681539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.681559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.681762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.355 [2024-07-15 11:45:28.681782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.355 qpair failed and we were unable to recover it. 00:29:54.355 [2024-07-15 11:45:28.682013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.682034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.682231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.682250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.682461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.682481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.682686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.682706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.682970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.682990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.683221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.683241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.683501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.683522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.683651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.683670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 
00:29:54.356 [2024-07-15 11:45:28.683864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.683884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.683984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.684004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.684278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.684299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.684526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.684546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.684830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.684853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.685060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.685080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.685266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.685286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.685470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.685489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.685686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.685705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.685874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.685894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 
00:29:54.356 [2024-07-15 11:45:28.686159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.686179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.686352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.686372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.686631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.686650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.686922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.686942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.687063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.687083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.687275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.687296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.687418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.687437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.687600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.687619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.687853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.687873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.688105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.688125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 
00:29:54.356 [2024-07-15 11:45:28.688331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.688351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.688584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.688604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.688820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.688840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.689123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.689143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.689311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.689332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.689561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.689580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.689763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.689783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.690099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.690118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.690355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.690374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.690556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.690576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 
00:29:54.356 [2024-07-15 11:45:28.690830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.690850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.691206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.691301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7390000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.691590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.691626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7390000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.691829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.691861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7390000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.692123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.692155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7390000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.692364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.692396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7390000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.692554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.692585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7390000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.692842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.692873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7390000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.693211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.693243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7390000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.693490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.693521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7390000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 
00:29:54.356 [2024-07-15 11:45:28.693735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.693757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.693943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.693963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.694073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.694092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.694275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.694297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.694536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.694559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.694741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.694761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.695005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.695024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.695284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.695304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.695488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.695508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.695710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.695730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 
00:29:54.356 [2024-07-15 11:45:28.696061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.696081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.696200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.696220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.696403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.696423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.696603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.696623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.696859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.696879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.697130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.697150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.697377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.697398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.697563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.697584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.697766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.697786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.698042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.698061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 
00:29:54.356 [2024-07-15 11:45:28.698343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.698364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.698485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.356 [2024-07-15 11:45:28.698505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.356 qpair failed and we were unable to recover it. 00:29:54.356 [2024-07-15 11:45:28.698764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.698783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.698973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.698993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.699275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.699295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.699504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.699524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.699726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.699745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.699957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.699976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.700186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.700206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.700484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.700504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 
00:29:54.357 [2024-07-15 11:45:28.700684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.700704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.700869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.700889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.701005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.701024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.701204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.701224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.701479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.701500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.701675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.701694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.702019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.702039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.702303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.702324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.702584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.702604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.702813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.702832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 
00:29:54.357 [2024-07-15 11:45:28.702930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.702950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.703216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.703236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.703451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.703470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.703648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.703668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.703925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.703948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.704202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.704222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.704520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.704541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.704716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.704735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.704845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.704864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.705029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.705049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 
00:29:54.357 [2024-07-15 11:45:28.705267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.705288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.705476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.705496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.705591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.705611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.705788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.705808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.705921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.705941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.706175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.706195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.706392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.706413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.706593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.706612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.706781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.706801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.706915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.706934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 
00:29:54.357 [2024-07-15 11:45:28.707190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.707210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.707421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.707441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.707620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.707639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.707871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.707891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.708198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.708217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.708380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.708400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.708659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.708678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.708874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.708892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.709013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.709032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.709192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.709212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 
00:29:54.357 [2024-07-15 11:45:28.709445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.709466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.709582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.709602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.709787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.709806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.709970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.709990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.710250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.710279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.710463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.710482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.710714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.710733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.710958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.710977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.711150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.711170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.711337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.711358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 
00:29:54.357 [2024-07-15 11:45:28.711450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.711471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.711661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.711681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.711852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.711871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.712115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.712135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.712330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.712353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.712477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.712497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.712610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.712629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.712761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.712781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.713005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.713025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.713185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.713204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 
00:29:54.357 [2024-07-15 11:45:28.713496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.713517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.713679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.713698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.713988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.714008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.714247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.714274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.714472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.714491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.714612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.714632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.714804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.714823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.357 [2024-07-15 11:45:28.715023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.357 [2024-07-15 11:45:28.715043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.357 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.715277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.715298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.715510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.715529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 
00:29:54.358 [2024-07-15 11:45:28.715647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.715667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.715881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.715900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.716008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.716028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.716300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.716321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.716456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.716476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.716581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.716600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.716847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.716867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.717028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.717047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.717220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.717240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.717486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.717506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 
00:29:54.358 [2024-07-15 11:45:28.717639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.717659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.717858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.717880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.718175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.718194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.718442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.718463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.718745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.718765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.718973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.718992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.719162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.719181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.719393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.719414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.719538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.719558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.719767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.719787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 
00:29:54.358 [2024-07-15 11:45:28.719896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.719916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.720126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.720146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.720316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.720335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.720507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.720527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.720708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.720727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.720861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.720881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.721140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.721160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.721419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.721440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.721604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.721623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.721754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.721773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 
00:29:54.358 [2024-07-15 11:45:28.721892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.721911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.722174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.722194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.722301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.722321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.722433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.722454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.722665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.722685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.722932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.722952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.723201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.723221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.723342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.723362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.723623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.723643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.723767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.723786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 
00:29:54.358 [2024-07-15 11:45:28.723974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.723994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.724227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.724246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.724466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.724486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.724653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.724674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.724766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.724785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.724854] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:54.358 [2024-07-15 11:45:28.725046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.725067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.725320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.725341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.725514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.725534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.725722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.725742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 
00:29:54.358 [2024-07-15 11:45:28.725914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.725934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.726172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.726192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.726379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.726399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.726524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.726543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.726733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.726754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.726852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.726871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.727141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.727161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.727358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.727379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.727538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.727558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.727716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.727736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 
00:29:54.358 [2024-07-15 11:45:28.727943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.727963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.728137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.728157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.728336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.728356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.728519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.728539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.728740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.728760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.729017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.729040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.729215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.729236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.729405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.729426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.729606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.729626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 00:29:54.358 [2024-07-15 11:45:28.729883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.358 [2024-07-15 11:45:28.729903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.358 qpair failed and we were unable to recover it. 
00:29:54.359 [2024-07-15 11:45:28.730082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.730102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.730283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.730304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.730518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.730538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.730790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.730810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.731022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.731042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.731170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.731189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.731359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.731380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.731504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.731523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.731747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.731767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.731940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.731960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 
00:29:54.359 [2024-07-15 11:45:28.732141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.732161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.732413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.732434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.732600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.732619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.732821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.732841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.733118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.733138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.733395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.733416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.733699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.733719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.733961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.733981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.734234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.734262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.734385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.734405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 
00:29:54.359 [2024-07-15 11:45:28.734581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.734601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.734842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.734863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.735150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.735170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.735296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.735317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.735485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.735505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.735687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.735707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.735936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.735956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.736224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.736244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.736540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.736560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.736682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.736702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 
00:29:54.359 [2024-07-15 11:45:28.736865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.736885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.737092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.737113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.737236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.737263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.737391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.737411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.737642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.737663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.737933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.737960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.738221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.738241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.738431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.738451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.738585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.738604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.738722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.738741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 
00:29:54.359 [2024-07-15 11:45:28.738917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.738937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.739110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.739130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.739380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.739401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.739511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.739530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.739713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.739732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.740025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.740045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.740289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.740309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.740556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.740575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.740753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.740773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.741084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.741104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 
00:29:54.359 [2024-07-15 11:45:28.741217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.741236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.741428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.741447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.741548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.741567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.741780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.741800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.742076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.742096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.742267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.742287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.742516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.742536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.742700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.742720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.742911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.742931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.743163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.743182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 
00:29:54.359 [2024-07-15 11:45:28.743475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.743496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.743621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.743642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.743911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.743932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.744126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.744145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.744356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.744375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.744555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.744575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.744761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.744781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.744992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.745012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.745244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.745270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.745441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.745461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 
00:29:54.359 [2024-07-15 11:45:28.745672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.745692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.745950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.745968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.746132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.746152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.746353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.746374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.746624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.746644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.746807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.746830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.747014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.359 [2024-07-15 11:45:28.747034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.359 qpair failed and we were unable to recover it. 00:29:54.359 [2024-07-15 11:45:28.747221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.747240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.747518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.747538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.747670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.747689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 
00:29:54.360 [2024-07-15 11:45:28.747949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.747968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.748160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.748180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.748412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.748433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.748606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.748626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.748788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.748808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.749055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.749074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.749311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.749331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.749521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.749540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.749712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.749732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.750039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.750059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 
00:29:54.360 [2024-07-15 11:45:28.750311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.750332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.750431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.750450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.750733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.750752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.750996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.751016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.751215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.751234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.751504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.751525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.751751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.751772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.751964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.751983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.752104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.752124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.752239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.752265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 
00:29:54.360 [2024-07-15 11:45:28.752525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.752545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.752773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.752792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.753085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.753104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.753298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.753319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.753547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.753566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.753761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.753780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.754049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.754069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.754305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.754325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.754459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.754479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.754648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.754668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 
00:29:54.360 [2024-07-15 11:45:28.754869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.754888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.755163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.755183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.755290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.755309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.755563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.755582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.755845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.755864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.756152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.756175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.756435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.756455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.756686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.756706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.756887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.756907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.757076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.757096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 
00:29:54.360 [2024-07-15 11:45:28.757317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.757338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.757630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.757649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.757858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.757877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.758142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.758162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.758427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.758448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.758635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.758655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.758915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.758935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.759178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.759199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.759465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.759485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.759771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.759791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 
00:29:54.360 [2024-07-15 11:45:28.760101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.760121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.760297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.760317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.760431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.760450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.760743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.760764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.760987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.761006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.761216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.761235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.360 [2024-07-15 11:45:28.761499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.360 [2024-07-15 11:45:28.761520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.360 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.761635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.761655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.761782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.761802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.761987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.762008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 
00:29:54.645 [2024-07-15 11:45:28.762222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.762243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.762360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.762380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.762549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.762568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.762798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.762818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.763121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.763140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.763397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.763417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.763543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.763562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.763670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.763690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.763936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.763956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.764144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.764163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 
00:29:54.645 [2024-07-15 11:45:28.764375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.764396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.764496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.764516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.764728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.764747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.764984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.765003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.765115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.765134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.765315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.765339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.765543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.765563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.765818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.765838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.766025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.766044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.766305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.766326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 
00:29:54.645 [2024-07-15 11:45:28.766615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.766634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.766923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.766942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.767129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.767149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.767328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.767349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.767448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.767468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.767640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.767660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.767843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.767863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.768042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.768063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.645 qpair failed and we were unable to recover it. 00:29:54.645 [2024-07-15 11:45:28.768318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.645 [2024-07-15 11:45:28.768338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.768523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.768544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 
00:29:54.646 [2024-07-15 11:45:28.768718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.768738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.768992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.769011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.769202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.769221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.769415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.769435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.769634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.769653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.769906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.769926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.770188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.770208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.770335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.770356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.770623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.770643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.770924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.770943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 
00:29:54.646 [2024-07-15 11:45:28.771198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.771218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.771396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.771416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.771539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.771559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.771790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.771809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.772083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.772102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.772359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.772380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.772551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.772570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.772688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.772707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.772907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.772926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.773169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.773189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 
00:29:54.646 [2024-07-15 11:45:28.773418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.773439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.773701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.773720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.773934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.773954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.774216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.774236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.774478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.774498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.774604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.774627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.774786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.774805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.775061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.775080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.775414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.775435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.775548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.775567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 
00:29:54.646 [2024-07-15 11:45:28.775798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.775818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.776067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.776086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.646 qpair failed and we were unable to recover it. 00:29:54.646 [2024-07-15 11:45:28.776341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.646 [2024-07-15 11:45:28.776362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.776550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.776570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.776801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.776821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.777083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.777102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.777268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.777288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.777531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.777551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.777726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.777745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.777881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.777901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 
00:29:54.647 [2024-07-15 11:45:28.778177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.778197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.778424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.778445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.778608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.778629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.778899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.778918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.779151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.779171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.779345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.779366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.779544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.779563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.779745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.779764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.780048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.780067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.780272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.780293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 
00:29:54.647 [2024-07-15 11:45:28.780461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.780481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.780769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.780788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.781127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.781203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.781501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.781568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.781823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.781859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.782159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.782191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.782534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.782567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.782828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.782859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.783170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.783201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.783456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.783489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 
00:29:54.647 [2024-07-15 11:45:28.783705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.783736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.784045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.784066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.784249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.784276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.784384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.784403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.784649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.784669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.784848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.784870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.785079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.785099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.785356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.785377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.785591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.785611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.785799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.785818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 
00:29:54.647 [2024-07-15 11:45:28.786075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.786094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.786377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.647 [2024-07-15 11:45:28.786397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.647 qpair failed and we were unable to recover it. 00:29:54.647 [2024-07-15 11:45:28.786561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.786580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.786763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.786782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.786992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.787011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.787264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.787284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.787464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.787484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.787676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.787695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.787959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.787978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.788237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.788267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 
00:29:54.648 [2024-07-15 11:45:28.788482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.788502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.788660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.788679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.788801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.788820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.789036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.789055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.789264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.789285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.789466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.789486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.789695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.789715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.789903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.789923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.790108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.790128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.790366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.790386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 
00:29:54.648 [2024-07-15 11:45:28.790521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.790541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.790655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.790675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.790937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.790957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.791116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.791135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.791395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.791415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.791588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.791607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.791842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.791861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.792027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.792046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.792285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.792306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.792428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.792447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 
00:29:54.648 [2024-07-15 11:45:28.792629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.792648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.792773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.792792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.792964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.792983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.793209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.793229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.793410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.793430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.793598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.793621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.793812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.793832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.794021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.794040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.794223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.794242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.794473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.794493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 
00:29:54.648 [2024-07-15 11:45:28.794750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.648 [2024-07-15 11:45:28.794769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.648 qpair failed and we were unable to recover it. 00:29:54.648 [2024-07-15 11:45:28.794952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.794971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 00:29:54.649 [2024-07-15 11:45:28.795229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.795248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 00:29:54.649 [2024-07-15 11:45:28.795367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.795388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 00:29:54.649 [2024-07-15 11:45:28.795581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.795600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 00:29:54.649 [2024-07-15 11:45:28.795730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.795749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 00:29:54.649 [2024-07-15 11:45:28.795929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.795948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 00:29:54.649 [2024-07-15 11:45:28.796123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.796142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 00:29:54.649 [2024-07-15 11:45:28.796326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.796347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 00:29:54.649 [2024-07-15 11:45:28.796477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.796496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 
00:29:54.649 [2024-07-15 11:45:28.796675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.796695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 00:29:54.649 [2024-07-15 11:45:28.796853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.796873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 00:29:54.649 [2024-07-15 11:45:28.797064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.797083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 00:29:54.649 [2024-07-15 11:45:28.797323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.797345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 00:29:54.649 [2024-07-15 11:45:28.797524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.797544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 00:29:54.649 [2024-07-15 11:45:28.797842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.797862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 00:29:54.649 [2024-07-15 11:45:28.798179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.798199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 00:29:54.649 [2024-07-15 11:45:28.798436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.798457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 00:29:54.649 [2024-07-15 11:45:28.798646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.798666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 00:29:54.649 [2024-07-15 11:45:28.798787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.798806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 
00:29:54.649 [2024-07-15 11:45:28.799016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.799036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 00:29:54.649 [2024-07-15 11:45:28.799270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.799290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 00:29:54.649 [2024-07-15 11:45:28.799555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.799575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 00:29:54.649 [2024-07-15 11:45:28.799773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.799794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 00:29:54.649 [2024-07-15 11:45:28.800068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.800089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 00:29:54.649 [2024-07-15 11:45:28.800261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.800282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 00:29:54.649 [2024-07-15 11:45:28.800397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.800417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 00:29:54.649 [2024-07-15 11:45:28.800624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.800644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 00:29:54.649 [2024-07-15 11:45:28.800851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.649 [2024-07-15 11:45:28.800872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.649 qpair failed and we were unable to recover it. 00:29:54.649 [2024-07-15 11:45:28.801112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.650 [2024-07-15 11:45:28.801132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.650 qpair failed and we were unable to recover it. 
00:29:54.650 [2024-07-15 11:45:28.801269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.650 [2024-07-15 11:45:28.801290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:54.650 qpair failed and we were unable to recover it.
00:29:54.655 [2024-07-15 11:45:28.847308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.655 [2024-07-15 11:45:28.847328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:54.655 qpair failed and we were unable to recover it.
00:29:54.655 [2024-07-15 11:45:28.847516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.655 [2024-07-15 11:45:28.847535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.655 qpair failed and we were unable to recover it. 00:29:54.655 [2024-07-15 11:45:28.847647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.655 [2024-07-15 11:45:28.847667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.655 qpair failed and we were unable to recover it. 00:29:54.655 [2024-07-15 11:45:28.847776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.655 [2024-07-15 11:45:28.847797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.655 qpair failed and we were unable to recover it. 00:29:54.655 [2024-07-15 11:45:28.848098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.655 [2024-07-15 11:45:28.848118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.655 qpair failed and we were unable to recover it. 00:29:54.655 [2024-07-15 11:45:28.848375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.655 [2024-07-15 11:45:28.848396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.655 qpair failed and we were unable to recover it. 00:29:54.655 [2024-07-15 11:45:28.848615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.655 [2024-07-15 11:45:28.848635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.655 qpair failed and we were unable to recover it. 00:29:54.655 [2024-07-15 11:45:28.848904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.848925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.849197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.849218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.849456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.849476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.849743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.849764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 
00:29:54.656 [2024-07-15 11:45:28.850013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.850033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.850198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.850218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.850418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.850440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.850653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.850675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.851018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.851041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.851226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.851246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.851450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.851471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.851784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.851805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.852054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.852075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.852335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.852356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 
00:29:54.656 [2024-07-15 11:45:28.852473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.852493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.852678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.852699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.853024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.853048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.853311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.853331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.853459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.853479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.853605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.853625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.853736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.853755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.853865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.853884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.854060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.854079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.854173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.854193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 
00:29:54.656 [2024-07-15 11:45:28.854300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.854320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.854487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.854506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.854630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.854650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.854854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.854874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.854987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.855006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.855115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.855134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.855312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.855332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.855457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.855476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.855573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.855592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.855693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.855712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 
00:29:54.656 [2024-07-15 11:45:28.855815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.855835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.656 qpair failed and we were unable to recover it. 00:29:54.656 [2024-07-15 11:45:28.856066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.656 [2024-07-15 11:45:28.856085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.856203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.856222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.856410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.856432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.856616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.856636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.856743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.856763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.856990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.857010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.857179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.857198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.857421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.857442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.857622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.857642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 
00:29:54.657 [2024-07-15 11:45:28.857806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.857825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.858116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.858135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.858313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.858334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.858517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.858537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.858719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.858739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.859051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.859071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.859209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.859245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.859442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.859463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.859701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.859721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.859905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.859923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 
00:29:54.657 [2024-07-15 11:45:28.860127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.860147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.860349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.860369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.860613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.860637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.860781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.860801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.861057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.861076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.861326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.861346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.861607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.861626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.861795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.861814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.861989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.862008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.862136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.862155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 
00:29:54.657 [2024-07-15 11:45:28.862333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.862353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.862451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.862470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.862662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.862682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.862804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.862824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.863053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.863072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.863273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.863294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.863418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.863438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.863617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.657 [2024-07-15 11:45:28.863636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.657 qpair failed and we were unable to recover it. 00:29:54.657 [2024-07-15 11:45:28.863747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.863766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.863969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.863988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 
00:29:54.658 [2024-07-15 11:45:28.864176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.864196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.864454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.864475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.864666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.864685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.864807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.864826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.864955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.864975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.865162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.865182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.865365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.865384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.865509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.865530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.865765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.865786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.865904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.865925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 
00:29:54.658 [2024-07-15 11:45:28.866030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.866049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.866152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.866172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.866344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.866364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.866489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.866510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.866625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.866645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.866905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.866925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.867019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.867039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.867269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.867290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.867426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.867446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.867629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.867649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 
00:29:54.658 [2024-07-15 11:45:28.867837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.867857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.868052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.868071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.868334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.868360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.868523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.868542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.868736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.868756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.868873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.868893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.869022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.869042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.869301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.869321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.869564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.869584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.869772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.869792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 
00:29:54.658 [2024-07-15 11:45:28.870101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.870122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.870379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.870400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.870636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.870656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.870870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.870891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.871078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.871098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.871235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.871274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.871490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.871511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.658 [2024-07-15 11:45:28.871691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.658 [2024-07-15 11:45:28.871711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.658 qpair failed and we were unable to recover it. 00:29:54.659 [2024-07-15 11:45:28.871916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.659 [2024-07-15 11:45:28.871936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.659 qpair failed and we were unable to recover it. 00:29:54.659 [2024-07-15 11:45:28.872198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.659 [2024-07-15 11:45:28.872218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.659 qpair failed and we were unable to recover it. 
00:29:54.659 [2024-07-15 11:45:28.872555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.659 [2024-07-15 11:45:28.872577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.659 qpair failed and we were unable to recover it. 00:29:54.659 [2024-07-15 11:45:28.872688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.659 [2024-07-15 11:45:28.872708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.659 qpair failed and we were unable to recover it. 00:29:54.659 [2024-07-15 11:45:28.872971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.659 [2024-07-15 11:45:28.872991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.659 qpair failed and we were unable to recover it. 00:29:54.659 [2024-07-15 11:45:28.873191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.659 [2024-07-15 11:45:28.873211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.659 qpair failed and we were unable to recover it. 00:29:54.659 [2024-07-15 11:45:28.873381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.659 [2024-07-15 11:45:28.873402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.659 qpair failed and we were unable to recover it. 00:29:54.659 [2024-07-15 11:45:28.873565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.659 [2024-07-15 11:45:28.873584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.659 qpair failed and we were unable to recover it. 00:29:54.659 [2024-07-15 11:45:28.873764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.659 [2024-07-15 11:45:28.873784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.659 qpair failed and we were unable to recover it. 00:29:54.659 [2024-07-15 11:45:28.873885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.659 [2024-07-15 11:45:28.873904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.659 qpair failed and we were unable to recover it. 00:29:54.659 [2024-07-15 11:45:28.874017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.659 [2024-07-15 11:45:28.874037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.659 qpair failed and we were unable to recover it. 00:29:54.659 [2024-07-15 11:45:28.874290] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:54.659 [2024-07-15 11:45:28.874328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.659 [2024-07-15 11:45:28.874353] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:54.659 [2024-07-15 11:45:28.874383] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:54.659 [2024-07-15 11:45:28.874402] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:54.659 [2024-07-15 11:45:28.874407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:54.659 [2024-07-15 11:45:28.874418] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:54.659 qpair failed and we were unable to recover it. 00:29:54.659 [2024-07-15 11:45:28.874606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.659 [2024-07-15 11:45:28.874641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:54.659 [2024-07-15 11:45:28.874572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:54.659 qpair failed and we were unable to recover it. 00:29:54.659 [2024-07-15 11:45:28.874683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:54.659 [2024-07-15 11:45:28.874810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:29:54.659 [2024-07-15 11:45:28.874816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:54.659 [2024-07-15 11:45:28.874952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.659 [2024-07-15 11:45:28.874983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:54.659 qpair failed and we were unable to recover it. 00:29:54.659 [2024-07-15 11:45:28.875163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.659 [2024-07-15 11:45:28.875185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.659 qpair failed and we were unable to recover it. 00:29:54.659 [2024-07-15 11:45:28.875371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.659 [2024-07-15 11:45:28.875392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.659 qpair failed and we were unable to recover it. 00:29:54.659 [2024-07-15 11:45:28.875527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.659 [2024-07-15 11:45:28.875547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.659 qpair failed and we were unable to recover it. 00:29:54.659 [2024-07-15 11:45:28.875760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.659 [2024-07-15 11:45:28.875780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.659 qpair failed and we were unable to recover it. 00:29:54.659 [2024-07-15 11:45:28.876001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.659 [2024-07-15 11:45:28.876020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.659 qpair failed and we were unable to recover it. 
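Per the app_setup_trace notices interleaved above, a snapshot of the nvmf target's tracepoints can be taken while it is still running, and the shared-memory trace file can be kept for offline analysis. A minimal sketch, assuming the target is still up with shm id 0 and that /tmp is an acceptable (arbitrary) destination:

  spdk_trace -s nvmf -i 0          # capture a snapshot of events at runtime, as the notice suggests
  cp /dev/shm/nvmf_trace.0 /tmp/   # copy the trace file for offline analysis/debug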
00:29:54.659 [2024-07-15 11:45:28.876226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.659 [2024-07-15 11:45:28.876246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.659 qpair failed and we were unable to recover it. 00:29:54.659 [2024-07-15 11:45:28.876458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.659 [2024-07-15 11:45:28.876479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.659 qpair failed and we were unable to recover it. 00:29:54.659 [2024-07-15 11:45:28.876681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.876706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.877035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.877056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.877310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.877331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.877463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.877484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.877735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.877756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.877954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.877973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.878267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.878288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.878510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.878530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 
00:29:54.660 [2024-07-15 11:45:28.878697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.878717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.878909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.878929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.879224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.879243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.879470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.879490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.879620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.879639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.879815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.879835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.880043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.880063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.880174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.880194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.880382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.880403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.880683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.880703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 
00:29:54.660 [2024-07-15 11:45:28.880909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.880928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.881036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.881056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.881363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.881383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.881519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.881539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.881708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.881728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.881913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.881933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.882188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.882207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.882393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.882413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.882655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.882675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.882893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.882913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 
00:29:54.660 [2024-07-15 11:45:28.883100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.883120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.883413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.883434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.883698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.883719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.883960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.883979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.884213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.884233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.884416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.884437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.884642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.884662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.884938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.884959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.885078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.885098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.660 qpair failed and we were unable to recover it. 00:29:54.660 [2024-07-15 11:45:28.885284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.660 [2024-07-15 11:45:28.885305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 
00:29:54.661 [2024-07-15 11:45:28.885568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.885588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.885780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.885800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.885975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.885999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.886115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.886136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.886361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.886382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.886498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.886518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.886711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.886731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.886856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.886877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.887064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.887085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.887339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.887360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 
00:29:54.661 [2024-07-15 11:45:28.887483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.887503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.887608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.887629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.887747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.887767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.887944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.887965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.888075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.888095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.888419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.888442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.888706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.888726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.888987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.889008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.889104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.889124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.889405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.889444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 
00:29:54.661 [2024-07-15 11:45:28.889738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.889758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.889937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.889957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.890217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.890237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.890588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.890611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.890794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.890815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.891132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.891153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.891321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.891342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.891479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.891500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.891762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.891782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.892071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.892092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 
00:29:54.661 [2024-07-15 11:45:28.892293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.892315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.892507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.892527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.892727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.892748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.892868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.892888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.893014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.661 [2024-07-15 11:45:28.893034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.661 qpair failed and we were unable to recover it. 00:29:54.661 [2024-07-15 11:45:28.893226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.893246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.893457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.893479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.893710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.893730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.893860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.893880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.894063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.894084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 
00:29:54.662 [2024-07-15 11:45:28.894266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.894287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.894397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.894417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.894548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.894572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.894848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.894868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.895104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.895126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.895398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.895419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.895540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.895560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.895739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.895760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.895928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.895948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.896179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.896199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 
00:29:54.662 [2024-07-15 11:45:28.896483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.896506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.896692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.896714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.896981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.897002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.897237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.897266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.897484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.897504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.897680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.897699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.897855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.897875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.898180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.898200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.898365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.898386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.898583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.898603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 
00:29:54.662 [2024-07-15 11:45:28.898794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.898815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.898919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.898938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.899061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.899083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.899190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.899212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.899381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.899402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.899509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.899529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.899642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.899662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.899785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.899806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.900049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.900070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.900231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.900251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 
00:29:54.662 [2024-07-15 11:45:28.900451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.900472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.900606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.900627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.900803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.900825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.662 qpair failed and we were unable to recover it. 00:29:54.662 [2024-07-15 11:45:28.901083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.662 [2024-07-15 11:45:28.901104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.901305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.901327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.901567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.901588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.901796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.901816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.902021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.902042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.902313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.902334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.902518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.902539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 
00:29:54.663 [2024-07-15 11:45:28.902703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.902723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.902973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.902993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.903118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.903144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.903404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.903425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.903587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.903607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.903702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.903721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.903984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.904004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.904111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.904131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.904337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.904358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.904594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.904615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 
00:29:54.663 [2024-07-15 11:45:28.904729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.904749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.905009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.905029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.905310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.905332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.905609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.905631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.905812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.905832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.906102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.906123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.906315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.906336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.906547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.906568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.906739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.906758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.907044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.907064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 
00:29:54.663 [2024-07-15 11:45:28.907325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.907346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.907475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.907495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.907680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.907700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.907850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.907871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.907983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.908002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.908179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.908199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.908394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.908415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.663 [2024-07-15 11:45:28.908601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.663 [2024-07-15 11:45:28.908622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.663 qpair failed and we were unable to recover it. 00:29:54.664 [2024-07-15 11:45:28.908805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.664 [2024-07-15 11:45:28.908826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.664 qpair failed and we were unable to recover it. 00:29:54.664 [2024-07-15 11:45:28.909145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.664 [2024-07-15 11:45:28.909166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.664 qpair failed and we were unable to recover it. 
00:29:54.664 [2024-07-15 11:45:28.909358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.664 [2024-07-15 11:45:28.909378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.664 qpair failed and we were unable to recover it. 00:29:54.664 [2024-07-15 11:45:28.909547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.664 [2024-07-15 11:45:28.909568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.664 qpair failed and we were unable to recover it. 00:29:54.664 [2024-07-15 11:45:28.909756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.664 [2024-07-15 11:45:28.909776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.664 qpair failed and we were unable to recover it. 00:29:54.664 [2024-07-15 11:45:28.910085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.664 [2024-07-15 11:45:28.910106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.664 qpair failed and we were unable to recover it. 00:29:54.664 [2024-07-15 11:45:28.910353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.664 [2024-07-15 11:45:28.910374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.664 qpair failed and we were unable to recover it. 00:29:54.664 [2024-07-15 11:45:28.910573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.664 [2024-07-15 11:45:28.910593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.664 qpair failed and we were unable to recover it. 00:29:54.664 [2024-07-15 11:45:28.910714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.664 [2024-07-15 11:45:28.910735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.664 qpair failed and we were unable to recover it. 00:29:54.664 [2024-07-15 11:45:28.910920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.664 [2024-07-15 11:45:28.910940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.664 qpair failed and we were unable to recover it. 00:29:54.664 [2024-07-15 11:45:28.911186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.664 [2024-07-15 11:45:28.911206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.664 qpair failed and we were unable to recover it. 00:29:54.664 [2024-07-15 11:45:28.911390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.664 [2024-07-15 11:45:28.911411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.664 qpair failed and we were unable to recover it. 
00:29:54.664 [2024-07-15 11:45:28.911674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.664 [2024-07-15 11:45:28.911694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:54.664 qpair failed and we were unable to recover it.
00:29:54.664 [... the same connect() failure (errno = 111, connection refused) and unrecoverable qpair error repeat continuously for tqpair=0x7f7398000b90, addr=10.0.0.2, port=4420 ...]
00:29:54.675 [2024-07-15 11:45:28.958588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.675 [2024-07-15 11:45:28.958609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:54.675 qpair failed and we were unable to recover it.
00:29:54.675 [2024-07-15 11:45:28.958847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.675 [2024-07-15 11:45:28.958867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.675 qpair failed and we were unable to recover it. 00:29:54.675 [2024-07-15 11:45:28.959154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.675 [2024-07-15 11:45:28.959175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.675 qpair failed and we were unable to recover it. 00:29:54.675 [2024-07-15 11:45:28.959355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.675 [2024-07-15 11:45:28.959376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.675 qpair failed and we were unable to recover it. 00:29:54.675 [2024-07-15 11:45:28.959550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.959570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 00:29:54.676 [2024-07-15 11:45:28.959699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.959719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 00:29:54.676 [2024-07-15 11:45:28.959906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.959926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 00:29:54.676 [2024-07-15 11:45:28.960120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.960141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 00:29:54.676 [2024-07-15 11:45:28.960346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.960382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 00:29:54.676 [2024-07-15 11:45:28.960488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.960509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 00:29:54.676 [2024-07-15 11:45:28.960688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.960708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 
00:29:54.676 [2024-07-15 11:45:28.960835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.960856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 00:29:54.676 [2024-07-15 11:45:28.960995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.961015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 00:29:54.676 [2024-07-15 11:45:28.961193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.961214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 00:29:54.676 [2024-07-15 11:45:28.961335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.961356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 00:29:54.676 [2024-07-15 11:45:28.961517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.961537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 00:29:54.676 [2024-07-15 11:45:28.961661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.961682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 00:29:54.676 [2024-07-15 11:45:28.961854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.961874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 00:29:54.676 [2024-07-15 11:45:28.962055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.962075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 00:29:54.676 [2024-07-15 11:45:28.962278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.962299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 00:29:54.676 [2024-07-15 11:45:28.962481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.962501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 
00:29:54.676 [2024-07-15 11:45:28.962604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.962624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 00:29:54.676 [2024-07-15 11:45:28.962721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.962741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 00:29:54.676 [2024-07-15 11:45:28.963082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.963102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 00:29:54.676 [2024-07-15 11:45:28.963212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.963232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 00:29:54.676 [2024-07-15 11:45:28.963402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.963422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 00:29:54.676 [2024-07-15 11:45:28.963666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.963685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 00:29:54.676 [2024-07-15 11:45:28.963896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.963915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 00:29:54.676 [2024-07-15 11:45:28.964095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.964115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 00:29:54.676 [2024-07-15 11:45:28.964299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.964319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 00:29:54.676 [2024-07-15 11:45:28.964428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.964448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 
00:29:54.676 [2024-07-15 11:45:28.964568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.964588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 00:29:54.676 [2024-07-15 11:45:28.964804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.964824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.676 qpair failed and we were unable to recover it. 00:29:54.676 [2024-07-15 11:45:28.964992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.676 [2024-07-15 11:45:28.965016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.965205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.965225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.965350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.965371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.965626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.965645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.965879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.965899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.966048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.966068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.966232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.966251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.966453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.966473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 
00:29:54.677 [2024-07-15 11:45:28.966653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.966673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.966788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.966807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.967068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.967088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.967291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.967312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.967441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.967460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.967695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.967715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.967854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.967874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.968054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.968073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.968330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.968350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.968538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.968558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 
00:29:54.677 [2024-07-15 11:45:28.968787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.968807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.968986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.969006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.969191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.969211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.969401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.969422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.969497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.969517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.969641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.969660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.969805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.969824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.970045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.970065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.970237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.970270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.970560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.970580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 
00:29:54.677 [2024-07-15 11:45:28.970763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.970782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.970947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.970967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.971079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.971100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.971310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.971330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.971437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.971457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.971570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.971590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.971782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.971801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.972001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.972020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.972126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.972145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 00:29:54.677 [2024-07-15 11:45:28.972390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.677 [2024-07-15 11:45:28.972411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.677 qpair failed and we were unable to recover it. 
00:29:54.677 [2024-07-15 11:45:28.972574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.972594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.972852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.972871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.973102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.973122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.973323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.973344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.973473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.973492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.973597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.973617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.973796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.973815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.973907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.973935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.974045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.974064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.974275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.974295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 
00:29:54.678 [2024-07-15 11:45:28.974410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.974430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.974540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.974559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.974765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.974784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.975046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.975066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.975182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.975203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.975382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.975403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.975571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.975591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.975869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.975889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.975996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.976015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.976246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.976273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 
00:29:54.678 [2024-07-15 11:45:28.976388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.976408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.976608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.976627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.976851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.976871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.977148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.977168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.977397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.977418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.977583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.977603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.977752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.977771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.977960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.977980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.978242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.978279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.978510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.978533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 
00:29:54.678 [2024-07-15 11:45:28.978715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.978734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.978900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.978919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.979090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.979109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.979367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.979388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.979570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.979589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.979710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.979729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.979907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.979926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.980104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.980123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.980303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.980324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 00:29:54.678 [2024-07-15 11:45:28.980507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.678 [2024-07-15 11:45:28.980527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.678 qpair failed and we were unable to recover it. 
00:29:54.679 [2024-07-15 11:45:28.980728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.980747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.980958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.980976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.981156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.981175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.981380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.981401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.981578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.981598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.981769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.981788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.981907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.981926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.982038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.982058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.982168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.982187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.982350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.982370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 
00:29:54.679 [2024-07-15 11:45:28.982607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.982627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.982925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.982945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.983153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.983172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.983368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.983388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.983571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.983590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.983780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.983800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.983994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.984014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.984171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.984190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.984367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.984386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.984563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.984583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 
00:29:54.679 [2024-07-15 11:45:28.984751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.984771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.984955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.984976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.985155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.985174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.985432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.985453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.985722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.985742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.985892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.985911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.986185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.986204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.986509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.986530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.986706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.986725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.986900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.986923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 
00:29:54.679 [2024-07-15 11:45:28.987146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.987165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.679 [2024-07-15 11:45:28.987400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.679 [2024-07-15 11:45:28.987420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.679 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.987585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.987604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.987790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.987809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.988018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.988037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.988220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.988239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.988526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.988546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.988752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.988771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.988917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.988936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.989118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.989138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 
00:29:54.680 [2024-07-15 11:45:28.989409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.989429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.989695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.989714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.989898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.989917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.990100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.990118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.990379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.990399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.990573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.990592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.990822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.990841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.991070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.991090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.991268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.991289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.991455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.991474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 
00:29:54.680 [2024-07-15 11:45:28.991725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.991745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.992008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.992027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.992153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.992172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.992271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.992291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.992416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.992436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.992698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.992717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.992836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.992856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.993117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.993137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.993268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.993289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.993534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.993554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 
00:29:54.680 [2024-07-15 11:45:28.993751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.993769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.993881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.993901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.993998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.994019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.994192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.994212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.994464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.994485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.994676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.994695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.994816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.994836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.995033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.995053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.995361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.995381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.995566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.995589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 
00:29:54.680 [2024-07-15 11:45:28.995818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.995838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.996120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.680 [2024-07-15 11:45:28.996140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.680 qpair failed and we were unable to recover it. 00:29:54.680 [2024-07-15 11:45:28.996393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:28.996413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:28.996646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:28.996665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:28.996828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:28.996847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:28.997041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:28.997061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:28.997219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:28.997239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:28.997506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:28.997585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7390000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:28.997969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:28.998041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:28.998279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:28.998315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 
00:29:54.681 [2024-07-15 11:45:28.998520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:28.998551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:28.998836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:28.998868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:28.999163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:28.999195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:28.999413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:28.999435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:28.999602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:28.999621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:28.999874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:28.999894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:29.000150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.000170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:29.000412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.000432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:29.000690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.000710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:29.000876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.000896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 
00:29:54.681 [2024-07-15 11:45:29.001024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.001044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:29.001325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.001345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:29.001626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.001645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:29.001834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.001853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:29.002085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.002104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:29.002272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.002293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:29.002514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.002534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:29.002708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.002727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:29.002972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.002991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:29.003176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.003195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 
00:29:54.681 [2024-07-15 11:45:29.003382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.003402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:29.003568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.003588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:29.003766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.003785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:29.003906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.003925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:29.004031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.004050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:29.004213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.004233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:29.004406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.004426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:29.004523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.004542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:29.004799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.004819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:29.004912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.004935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 
00:29:54.681 [2024-07-15 11:45:29.005214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.005235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:29.005587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.005628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7390000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:29.005933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.681 [2024-07-15 11:45:29.005966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7390000b90 with addr=10.0.0.2, port=4420 00:29:54.681 qpair failed and we were unable to recover it. 00:29:54.681 [2024-07-15 11:45:29.006106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.006137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7390000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.006406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.006439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7390000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.006744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.006775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7390000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.007042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.007073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7390000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.007304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.007338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7390000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.007498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.007530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7390000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.007743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.007774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7390000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 
00:29:54.682 [2024-07-15 11:45:29.008001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.008022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.008311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.008331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.008566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.008585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.008689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.008709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.008887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.008907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.009192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.009211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.009340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.009360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.009594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.009612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.009790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.009809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.009910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.009929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 
00:29:54.682 [2024-07-15 11:45:29.010048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.010067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.010171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.010191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.010295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.010316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.010562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.010581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.010743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.010763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.011074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.011093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.011404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.011424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.011718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.011738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.011994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.012013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.012236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.012272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 
00:29:54.682 [2024-07-15 11:45:29.012444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.012464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.012654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.012673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.012898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.012917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.013151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.013171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.013426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.013446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.013561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.013581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.013715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.013734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.013926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.013945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.014192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.014211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.014386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.014410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 
00:29:54.682 [2024-07-15 11:45:29.014607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.014627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.014748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.014768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.014881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.014901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.015016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.682 [2024-07-15 11:45:29.015035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.682 qpair failed and we were unable to recover it. 00:29:54.682 [2024-07-15 11:45:29.015210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.015229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.015426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.015446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.015652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.015672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.015913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.015932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.016037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.016057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.016176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.016195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 
00:29:54.683 [2024-07-15 11:45:29.016375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.016395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.016501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.016521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.016683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.016703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.016798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.016817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.016938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.016957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.017222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.017242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.017423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.017443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.017674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.017693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.017933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.017952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.018123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.018143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 
00:29:54.683 [2024-07-15 11:45:29.018335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.018356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.018611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.018630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.018816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.018835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.018968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.018988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.019261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.019282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.019513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.019533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.019628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.019648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.019850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.019870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.020029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.020049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.020221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.020241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 
00:29:54.683 [2024-07-15 11:45:29.020422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.020441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.020624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.020645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.020760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.020779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.020969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.020989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.021190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.021209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.021506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.021527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.021652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.021671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.683 [2024-07-15 11:45:29.021866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.683 [2024-07-15 11:45:29.021886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.683 qpair failed and we were unable to recover it. 00:29:54.684 [2024-07-15 11:45:29.021999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.684 [2024-07-15 11:45:29.022019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.684 qpair failed and we were unable to recover it. 00:29:54.684 [2024-07-15 11:45:29.022178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.684 [2024-07-15 11:45:29.022201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.684 qpair failed and we were unable to recover it. 
00:29:54.684 [2024-07-15 11:45:29.022323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.684 [2024-07-15 11:45:29.022344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:54.684 qpair failed and we were unable to recover it.
00:29:54.686 [2024-07-15 11:45:29.046202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.686 [2024-07-15 11:45:29.046239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420
00:29:54.686 qpair failed and we were unable to recover it.
00:29:54.686 [2024-07-15 11:45:29.047418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.686 [2024-07-15 11:45:29.047490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420
00:29:54.686 qpair failed and we were unable to recover it.
00:29:54.689 [2024-07-15 11:45:29.069928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.689 [2024-07-15 11:45:29.069947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:54.689 qpair failed and we were unable to recover it.
00:29:54.689 [2024-07-15 11:45:29.070140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.070159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 00:29:54.689 [2024-07-15 11:45:29.070350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.070371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 00:29:54.689 [2024-07-15 11:45:29.070586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.070605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 00:29:54.689 [2024-07-15 11:45:29.070789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.070809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 00:29:54.689 [2024-07-15 11:45:29.070976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.070996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 00:29:54.689 [2024-07-15 11:45:29.071250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.071278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 00:29:54.689 [2024-07-15 11:45:29.071463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.071482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 00:29:54.689 [2024-07-15 11:45:29.071661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.071680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 00:29:54.689 [2024-07-15 11:45:29.071939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.071958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 00:29:54.689 [2024-07-15 11:45:29.072120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.072139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 
00:29:54.689 [2024-07-15 11:45:29.072298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.072318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 00:29:54.689 [2024-07-15 11:45:29.072579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.072599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 00:29:54.689 [2024-07-15 11:45:29.072860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.072880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 00:29:54.689 [2024-07-15 11:45:29.073108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.073128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 00:29:54.689 [2024-07-15 11:45:29.073388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.073409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 00:29:54.689 [2024-07-15 11:45:29.073588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.073607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 00:29:54.689 [2024-07-15 11:45:29.073744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.073763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 00:29:54.689 [2024-07-15 11:45:29.074022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.074041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 00:29:54.689 [2024-07-15 11:45:29.074202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.074222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 00:29:54.689 [2024-07-15 11:45:29.074321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.074341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 
00:29:54.689 [2024-07-15 11:45:29.074599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.074619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 00:29:54.689 [2024-07-15 11:45:29.074749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.074769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 00:29:54.689 [2024-07-15 11:45:29.074898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.074917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 00:29:54.689 [2024-07-15 11:45:29.075198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.075218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 00:29:54.689 [2024-07-15 11:45:29.075397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.075417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 00:29:54.689 [2024-07-15 11:45:29.075595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.075616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 00:29:54.689 [2024-07-15 11:45:29.075848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.689 [2024-07-15 11:45:29.075867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.689 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.076137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.076156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.076363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.076384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.076554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.076576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 
00:29:54.690 [2024-07-15 11:45:29.076778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.076797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.076984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.077003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.077209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.077228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.077385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.077406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.077593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.077612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.077871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.077890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.078003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.078023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.078202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.078221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.078516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.078536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.078655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.078675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 
00:29:54.690 [2024-07-15 11:45:29.078808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.078828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.078995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.079015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.079220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.079240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.079368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.079389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.079500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.079519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.079702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.079721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.079986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.080006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.080307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.080328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.080490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.080510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.080640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.080659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 
00:29:54.690 [2024-07-15 11:45:29.080765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.080785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.080955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.080974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.081232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.081252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.081437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.081457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.081652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.081671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.081890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.081909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.082163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.082185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.082411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.082431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.082539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.082559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.690 [2024-07-15 11:45:29.082740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.082759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 
00:29:54.690 [2024-07-15 11:45:29.082931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.690 [2024-07-15 11:45:29.082951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.690 qpair failed and we were unable to recover it. 00:29:54.968 [2024-07-15 11:45:29.083157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.968 [2024-07-15 11:45:29.083178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.968 qpair failed and we were unable to recover it. 00:29:54.968 [2024-07-15 11:45:29.083283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.968 [2024-07-15 11:45:29.083305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.968 qpair failed and we were unable to recover it. 00:29:54.968 [2024-07-15 11:45:29.083416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.968 [2024-07-15 11:45:29.083436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.968 qpair failed and we were unable to recover it. 00:29:54.968 [2024-07-15 11:45:29.083643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.968 [2024-07-15 11:45:29.083663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.968 qpair failed and we were unable to recover it. 00:29:54.968 [2024-07-15 11:45:29.083780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.968 [2024-07-15 11:45:29.083799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.968 qpair failed and we were unable to recover it. 00:29:54.968 [2024-07-15 11:45:29.084017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.968 [2024-07-15 11:45:29.084036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.968 qpair failed and we were unable to recover it. 00:29:54.968 [2024-07-15 11:45:29.084225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.968 [2024-07-15 11:45:29.084244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.968 qpair failed and we were unable to recover it. 00:29:54.968 [2024-07-15 11:45:29.084367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.968 [2024-07-15 11:45:29.084387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.968 qpair failed and we were unable to recover it. 00:29:54.968 [2024-07-15 11:45:29.084592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.968 [2024-07-15 11:45:29.084612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.968 qpair failed and we were unable to recover it. 
00:29:54.968 [2024-07-15 11:45:29.084873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.968 [2024-07-15 11:45:29.084892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.968 qpair failed and we were unable to recover it. 00:29:54.968 [2024-07-15 11:45:29.084988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.968 [2024-07-15 11:45:29.085007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.968 qpair failed and we were unable to recover it. 00:29:54.968 [2024-07-15 11:45:29.085201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.968 [2024-07-15 11:45:29.085221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.968 qpair failed and we were unable to recover it. 00:29:54.968 [2024-07-15 11:45:29.085357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.968 [2024-07-15 11:45:29.085378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.968 qpair failed and we were unable to recover it. 00:29:54.968 [2024-07-15 11:45:29.085495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.968 [2024-07-15 11:45:29.085514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.968 qpair failed and we were unable to recover it. 00:29:54.968 [2024-07-15 11:45:29.085684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.968 [2024-07-15 11:45:29.085704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.968 qpair failed and we were unable to recover it. 00:29:54.968 [2024-07-15 11:45:29.085973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.968 [2024-07-15 11:45:29.085993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.968 qpair failed and we were unable to recover it. 00:29:54.968 [2024-07-15 11:45:29.086112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.968 [2024-07-15 11:45:29.086131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.968 qpair failed and we were unable to recover it. 00:29:54.968 [2024-07-15 11:45:29.086244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.086271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.086464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.086483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 
00:29:54.969 [2024-07-15 11:45:29.086643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.086663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.086843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.086862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.087019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.087038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.087303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.087324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.087548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.087567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.087744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.087763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.087947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.087966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.088210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.088229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.088412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.088432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.088699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.088718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 
00:29:54.969 [2024-07-15 11:45:29.088843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.088863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.089038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.089058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.089261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.089281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.089483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.089503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.089763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.089782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.089995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.090014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.090206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.090233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.090416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.090437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.090619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.090638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.090827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.090847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 
00:29:54.969 [2024-07-15 11:45:29.091092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.091112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.091312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.091332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.091453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.091473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.091647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.091666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.091790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.091809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.092089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.092108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.092399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.092420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.092535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.092555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.092668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.092687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.092859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.092879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 
00:29:54.969 [2024-07-15 11:45:29.093068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.093088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.093279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.093300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.093495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.093514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.093781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.093800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.093975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.093995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.094209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.094228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.094453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.969 [2024-07-15 11:45:29.094473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.969 qpair failed and we were unable to recover it. 00:29:54.969 [2024-07-15 11:45:29.094674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.970 [2024-07-15 11:45:29.094693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.970 qpair failed and we were unable to recover it. 00:29:54.970 [2024-07-15 11:45:29.094903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.970 [2024-07-15 11:45:29.094922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.970 qpair failed and we were unable to recover it. 00:29:54.970 [2024-07-15 11:45:29.095185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.970 [2024-07-15 11:45:29.095205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.970 qpair failed and we were unable to recover it. 
00:29:54.970 [2024-07-15 11:45:29.095460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.970 [2024-07-15 11:45:29.095480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.970 qpair failed and we were unable to recover it. 00:29:54.970 [2024-07-15 11:45:29.095588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.970 [2024-07-15 11:45:29.095607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.970 qpair failed and we were unable to recover it. 00:29:54.970 [2024-07-15 11:45:29.095729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.970 [2024-07-15 11:45:29.095748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.970 qpair failed and we were unable to recover it. 00:29:54.970 [2024-07-15 11:45:29.096016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.970 [2024-07-15 11:45:29.096036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.970 qpair failed and we were unable to recover it. 00:29:54.970 [2024-07-15 11:45:29.096220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.970 [2024-07-15 11:45:29.096240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.970 qpair failed and we were unable to recover it. 00:29:54.970 [2024-07-15 11:45:29.096536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.970 [2024-07-15 11:45:29.096556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.970 qpair failed and we were unable to recover it. 00:29:54.970 [2024-07-15 11:45:29.096670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.970 [2024-07-15 11:45:29.096689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.970 qpair failed and we were unable to recover it. 00:29:54.970 [2024-07-15 11:45:29.096919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.970 [2024-07-15 11:45:29.096939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.970 qpair failed and we were unable to recover it. 00:29:54.970 [2024-07-15 11:45:29.097170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.970 [2024-07-15 11:45:29.097190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.970 qpair failed and we were unable to recover it. 00:29:54.970 [2024-07-15 11:45:29.097365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.970 [2024-07-15 11:45:29.097385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.970 qpair failed and we were unable to recover it. 
00:29:54.970 [2024-07-15 11:45:29.097597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.970 [2024-07-15 11:45:29.097617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:54.970 qpair failed and we were unable to recover it.
00:29:54.972 [2024-07-15 11:45:29.115397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.972 [2024-07-15 11:45:29.115433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420
00:29:54.972 qpair failed and we were unable to recover it.
00:29:54.972 [2024-07-15 11:45:29.116962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.972 [2024-07-15 11:45:29.116984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420
00:29:54.972 qpair failed and we were unable to recover it.
00:29:54.976 [2024-07-15 11:45:29.145560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.145580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.145690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.145709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.145893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.145914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.146160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.146180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.146422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.146443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.146703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.146723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.146933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.146952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.147214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.147233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.147503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.147523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.147627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.147647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 
00:29:54.976 [2024-07-15 11:45:29.147810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.147829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.148029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.148048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.148238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.148263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.148445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.148464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.148660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.148679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.148882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.148901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.149086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.149106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.149337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.149356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.149616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.149635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.149837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.149856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 
00:29:54.976 [2024-07-15 11:45:29.150116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.150136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.150339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.150360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.150534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.150554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.150717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.150737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.151004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.151023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.151320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.151340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.151627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.151647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.151853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.151873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.152071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.152090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.152347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.152367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 
00:29:54.976 [2024-07-15 11:45:29.152500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.152519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.152776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.152796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.152922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.152941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.153122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.976 [2024-07-15 11:45:29.153142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.976 qpair failed and we were unable to recover it. 00:29:54.976 [2024-07-15 11:45:29.153312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.153332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 00:29:54.977 [2024-07-15 11:45:29.153593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.153612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 00:29:54.977 [2024-07-15 11:45:29.153789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.153812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 00:29:54.977 [2024-07-15 11:45:29.154066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.154085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 00:29:54.977 [2024-07-15 11:45:29.154281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.154301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 00:29:54.977 [2024-07-15 11:45:29.154471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.154491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 
00:29:54.977 [2024-07-15 11:45:29.154697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.154716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 00:29:54.977 [2024-07-15 11:45:29.154939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.154958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 00:29:54.977 [2024-07-15 11:45:29.155191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.155211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 00:29:54.977 [2024-07-15 11:45:29.155418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.155438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 00:29:54.977 [2024-07-15 11:45:29.155685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.155705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 00:29:54.977 [2024-07-15 11:45:29.155954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.155973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 00:29:54.977 [2024-07-15 11:45:29.156078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.156098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 00:29:54.977 [2024-07-15 11:45:29.156271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.156292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 00:29:54.977 [2024-07-15 11:45:29.156499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.156519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 00:29:54.977 [2024-07-15 11:45:29.156718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.156737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 
00:29:54.977 [2024-07-15 11:45:29.156870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.156889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 00:29:54.977 [2024-07-15 11:45:29.157062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.157081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 00:29:54.977 [2024-07-15 11:45:29.157204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.157223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 00:29:54.977 [2024-07-15 11:45:29.157566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.157586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 00:29:54.977 [2024-07-15 11:45:29.157701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.157720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 00:29:54.977 [2024-07-15 11:45:29.157973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.157992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 00:29:54.977 [2024-07-15 11:45:29.158274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.158294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 00:29:54.977 [2024-07-15 11:45:29.158458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.158477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 00:29:54.977 [2024-07-15 11:45:29.158637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.158656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 00:29:54.977 [2024-07-15 11:45:29.158914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.158933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 
00:29:54.977 [2024-07-15 11:45:29.159187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.159206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 00:29:54.977 [2024-07-15 11:45:29.159453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.159473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 00:29:54.977 [2024-07-15 11:45:29.159594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.159613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.977 qpair failed and we were unable to recover it. 00:29:54.977 [2024-07-15 11:45:29.159709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.977 [2024-07-15 11:45:29.159729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.159983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.160003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.160110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.160129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.160309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.160329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.160521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.160541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.160716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.160735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.161005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.161025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 
00:29:54.978 [2024-07-15 11:45:29.161307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.161327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.161559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.161578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.161806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.161825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.161997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.162016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.162273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.162293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.162403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.162422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.162666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.162688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.162864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.162883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.163079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.163098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.163266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.163286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 
00:29:54.978 [2024-07-15 11:45:29.163547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.163566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.163746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.163765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.163925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.163945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.164218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.164238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.164505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.164526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.164807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.164827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.164994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.165013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.165245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.165271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.165525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.165544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.165723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.165743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 
00:29:54.978 [2024-07-15 11:45:29.165988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.166008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.166269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.166290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.166522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.166542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.166723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.166742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.166999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.167018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.167275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.167295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.167457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.167476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.167758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.167777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.168033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.168053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.168316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.168336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 
00:29:54.978 [2024-07-15 11:45:29.168553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.168573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.168749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.168769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.169014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.978 [2024-07-15 11:45:29.169033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.978 qpair failed and we were unable to recover it. 00:29:54.978 [2024-07-15 11:45:29.169322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.169341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 00:29:54.979 [2024-07-15 11:45:29.169573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.169592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 00:29:54.979 [2024-07-15 11:45:29.169850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.169869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 00:29:54.979 [2024-07-15 11:45:29.170116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.170135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 00:29:54.979 [2024-07-15 11:45:29.170422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.170442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 00:29:54.979 [2024-07-15 11:45:29.170703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.170722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 00:29:54.979 [2024-07-15 11:45:29.170969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.170988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 
00:29:54.979 [2024-07-15 11:45:29.171167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.171186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 00:29:54.979 [2024-07-15 11:45:29.171361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.171381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 00:29:54.979 [2024-07-15 11:45:29.171601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.171621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 00:29:54.979 [2024-07-15 11:45:29.171874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.171893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 00:29:54.979 [2024-07-15 11:45:29.172123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.172143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 00:29:54.979 [2024-07-15 11:45:29.172394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.172414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 00:29:54.979 [2024-07-15 11:45:29.172670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.172692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 00:29:54.979 [2024-07-15 11:45:29.172853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.172872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 00:29:54.979 [2024-07-15 11:45:29.173102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.173121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 00:29:54.979 [2024-07-15 11:45:29.173379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.173398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 
00:29:54.979 [2024-07-15 11:45:29.173645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.173664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 00:29:54.979 [2024-07-15 11:45:29.173844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.173863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 00:29:54.979 [2024-07-15 11:45:29.174160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.174180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 00:29:54.979 [2024-07-15 11:45:29.174491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.174512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 00:29:54.979 [2024-07-15 11:45:29.174675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.174694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 00:29:54.979 [2024-07-15 11:45:29.174954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.174974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 00:29:54.979 [2024-07-15 11:45:29.175168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.175187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 00:29:54.979 [2024-07-15 11:45:29.175460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.175480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 00:29:54.979 [2024-07-15 11:45:29.175598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.175617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 00:29:54.979 [2024-07-15 11:45:29.175873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.175892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it. 
00:29:54.979 [2024-07-15 11:45:29.176104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.979 [2024-07-15 11:45:29.176124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.979 qpair failed and we were unable to recover it.
00:29:54.985 [the same three-line error (connect() failed, errno = 111 -> sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 11:45:29.176 and 11:45:29.225, differing only in timestamps]
00:29:54.985 [2024-07-15 11:45:29.225582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.225601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.225765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.225785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.225878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.225897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.226127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.226150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.226405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.226425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.226679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.226699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.226863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.226883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.227136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.227156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.227406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.227426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.227588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.227608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 
00:29:54.985 [2024-07-15 11:45:29.227804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.227824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.228031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.228051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.228225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.228244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.228423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.228443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.228636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.228656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.228829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.228848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.229123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.229143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.229270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.229291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.229494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.229513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.229691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.229711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 
00:29:54.985 [2024-07-15 11:45:29.229986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.230005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.230169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.230188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.230440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.230459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.230690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.230709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.230907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.230928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.231147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.231166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.231424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.231445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.231699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.231718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.231829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.231848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.232050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.232069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 
00:29:54.985 [2024-07-15 11:45:29.232263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.232282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.232392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.232411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.232524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.232543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.232719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.232738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.232907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.232926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.233204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.985 [2024-07-15 11:45:29.233223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.985 qpair failed and we were unable to recover it. 00:29:54.985 [2024-07-15 11:45:29.233433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.233453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.233555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.233574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.233803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.233821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.234010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.234029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 
00:29:54.986 [2024-07-15 11:45:29.234264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.234284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.234484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.234503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.234678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.234696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.234905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.234928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.235094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.235114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.235286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.235307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.235427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.235446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.235557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.235576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.235757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.235777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.236074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.236094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 
00:29:54.986 [2024-07-15 11:45:29.236298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.236318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.236521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.236540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.236771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.236790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.236996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.237016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.237185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.237204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.237385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.237405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.237531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.237550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.237758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.237777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.237974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.237993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.238188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.238207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 
00:29:54.986 [2024-07-15 11:45:29.238437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.238457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.238644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.238663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.238931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.238950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.239153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.239172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.239363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.239384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.239675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.239695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.239930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.239950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.240115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.240135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.240418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.240438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 00:29:54.986 [2024-07-15 11:45:29.240666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.986 [2024-07-15 11:45:29.240685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.986 qpair failed and we were unable to recover it. 
00:29:54.986 [2024-07-15 11:45:29.240799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.240818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.240995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.241014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.241186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.241206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.241394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.241414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.241523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.241543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.241747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.241766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.241873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.241893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.242058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.242078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.242187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.242207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.242385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.242405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 
00:29:54.987 [2024-07-15 11:45:29.242597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.242616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.242899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.242919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.243024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.243043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.243295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.243319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.243420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.243439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.243696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.243715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.243907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.243926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.244021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.244040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.244221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.244240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.244484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.244503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 
00:29:54.987 [2024-07-15 11:45:29.244681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.244700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.244897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.244916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.245145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.245164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.245352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.245372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.245651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.245671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.245912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.245930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.246088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.246108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.246394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.246414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.246622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.246642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.246763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.246783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 
00:29:54.987 [2024-07-15 11:45:29.246962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.246981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.247209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.247229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.247449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.247469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.247630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.247650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.247948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.247967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.248128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.248147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.248388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.248408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.248722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.248741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.248864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.248884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 00:29:54.987 [2024-07-15 11:45:29.249136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.987 [2024-07-15 11:45:29.249155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.987 qpair failed and we were unable to recover it. 
00:29:54.987 [2024-07-15 11:45:29.249348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.249369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.249575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.249595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.249853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.249872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.250133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.250152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.250412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.250432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.250668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.250688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.250888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.250907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.251137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.251156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.251330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.251350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.251598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.251617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 
00:29:54.988 [2024-07-15 11:45:29.251922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.251941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.252225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.252244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.252530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.252550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.252738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.252761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.252935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.252954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.253126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.253145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.253388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.253408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.253591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.253610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.253880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.253899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.254135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.254154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 
00:29:54.988 [2024-07-15 11:45:29.254318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.254338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.254461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.254481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.254764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.254783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.255050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.255069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.255320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.255340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.255506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.255526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.255707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.255727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.255987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.256007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.256292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.256313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.256514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.256534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 
00:29:54.988 [2024-07-15 11:45:29.256767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.256787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.256970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.256989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.257159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.257178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.257478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.257498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.257734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.257752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.258018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.258037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.258260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.258279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.258403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.258422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.258613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.258632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.258805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.258825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 
00:29:54.988 [2024-07-15 11:45:29.259057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.988 [2024-07-15 11:45:29.259077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.988 qpair failed and we were unable to recover it. 00:29:54.988 [2024-07-15 11:45:29.259400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.259420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.259654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.259674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.259929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.259948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.260222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.260241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.260438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.260457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.260717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.260737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.260842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.260861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.261130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.261149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.261353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.261374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 
00:29:54.989 [2024-07-15 11:45:29.261637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.261659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.261907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.261927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.262106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.262126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.262285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.262309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.262557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.262577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.262756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.262775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.263067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.263086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.263349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.263369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.263550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.263569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.263730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.263749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 
00:29:54.989 [2024-07-15 11:45:29.263928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.263947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.264195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.264214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.264499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.264518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.264774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.264794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.265079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.265098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.265289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.265309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.265520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.265539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.265740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.265760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.265931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.265950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.266206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.266225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 
00:29:54.989 [2024-07-15 11:45:29.266400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.266420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.266539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.266558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.266823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.266842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.267122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.267141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.267402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.267422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.267533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.267552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.267714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.267733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.267972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.267991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.268267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.268287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.989 [2024-07-15 11:45:29.268466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.268486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 
00:29:54.989 [2024-07-15 11:45:29.268661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.989 [2024-07-15 11:45:29.268680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.989 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.268951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.268971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.269207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.269227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.269449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.269469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.269636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.269655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.269934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.269953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.270120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.270139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.270276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.270297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.270484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.270503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.270684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.270703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 
00:29:54.990 [2024-07-15 11:45:29.270903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.270922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.271210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.271229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.271423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.271443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.271564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.271587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.271775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.271795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.271966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.271985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.272192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.272211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.272500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.272520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.272700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.272720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.273002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.273022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 
00:29:54.990 [2024-07-15 11:45:29.273223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.273242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.273519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.273539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.273699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.273719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.273973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.273992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.274246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.274272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.274434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.274454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.274579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.274598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.274760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.274779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.274887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.274905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.275164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.275184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 
00:29:54.990 [2024-07-15 11:45:29.275415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.275436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.275604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.275623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.275799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.275818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.275988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.276007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.276246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.276272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.990 qpair failed and we were unable to recover it. 00:29:54.990 [2024-07-15 11:45:29.276480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.990 [2024-07-15 11:45:29.276499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.276776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.276795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.277078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.277097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.277353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.277373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.277470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.277489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 
00:29:54.991 [2024-07-15 11:45:29.277817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.277873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.278192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.278225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9d70 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.278527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.278566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7388000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.278865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.278887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.279153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.279172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.279432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.279452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.279616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.279636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.279934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.279953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.280187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.280206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.280376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.280396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 
00:29:54.991 [2024-07-15 11:45:29.280626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.280646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.280739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.280758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.281025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.281044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.281278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.281302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.281541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.281561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.281823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.281842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.282080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.282100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.282360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.282379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.282617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.282636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.282878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.282897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 
00:29:54.991 [2024-07-15 11:45:29.283129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.283148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.283379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.283399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.283662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.283681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.283953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.283972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.284231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.284251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.284462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.284481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.284711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.284730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.285017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.285036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.285267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.285286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.285447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.285467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 
00:29:54.991 [2024-07-15 11:45:29.285723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.285742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.285986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.286005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.286221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.286240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.286506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.286526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.286772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.286791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.287020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.287039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.991 qpair failed and we were unable to recover it. 00:29:54.991 [2024-07-15 11:45:29.287294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.991 [2024-07-15 11:45:29.287315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.287511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.287530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.287705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.287724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.287885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.287905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 
00:29:54.992 [2024-07-15 11:45:29.288074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.288094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.288353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.288374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.288609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.288628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.288866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.288885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.289078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.289097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.289389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.289409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.289652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.289672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.289836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.289856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.290112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.290131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.290330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.290349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 
00:29:54.992 [2024-07-15 11:45:29.290578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.290597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.290850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.290869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.291031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.291050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.291226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.291249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.291499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.291519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.291620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.291639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.291821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.291841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.291999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.292018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.292276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.292296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.292607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.292627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 
00:29:54.992 [2024-07-15 11:45:29.292882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.292900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.293150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.293170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.293351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.293371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.293541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.293560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.293789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.293808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.294021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.294040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.294147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.294167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.294416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.294436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.294595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.294614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.294777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.294796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 
00:29:54.992 [2024-07-15 11:45:29.294991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.295011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.295276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.295296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.295529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.295549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.295786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.295805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.296064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.296085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.296300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.296320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.296579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.992 [2024-07-15 11:45:29.296599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.992 qpair failed and we were unable to recover it. 00:29:54.992 [2024-07-15 11:45:29.296860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.296878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.297168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.297188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.297373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.297394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 
00:29:54.993 [2024-07-15 11:45:29.297651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.297671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.297845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.297864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.298067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.298087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.298350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.298371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.298601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.298621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.298867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.298886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.299143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.299162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.299450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.299470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.299636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.299655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.299814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.299833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 
00:29:54.993 [2024-07-15 11:45:29.300012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.300031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.300232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.300251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.300456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.300476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.300703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.300726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.300972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.300992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.301221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.301240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.301529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.301548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.301796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.301815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.302101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.302120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.302333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.302353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 
00:29:54.993 [2024-07-15 11:45:29.302610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.302629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.302890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.302909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.303160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.303180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.303291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.303312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.303489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.303508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.303766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.303785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.304022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.304040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.304216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.304236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.304518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.304538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.304713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.304732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 
00:29:54.993 [2024-07-15 11:45:29.304911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.304931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.305159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.305178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.305434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.305454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.305706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.305725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.305953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.305972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.306132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.306152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.306315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.306336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.306596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.993 [2024-07-15 11:45:29.306615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.993 qpair failed and we were unable to recover it. 00:29:54.993 [2024-07-15 11:45:29.306874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.306893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.307035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.307054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 
00:29:54.994 [2024-07-15 11:45:29.307308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.307330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.307598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.307618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.307790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.307809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.308064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.308083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.308266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.308286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.308396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.308415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.308612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.308631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.308810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.308829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.308935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.308954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.309211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.309230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 
00:29:54.994 [2024-07-15 11:45:29.309416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.309436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.309692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.309711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.309966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.309986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.310176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.310195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.310363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.310384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.310572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.310592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.310753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.310772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.311010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.311030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.311281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.311301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.311547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.311566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 
00:29:54.994 [2024-07-15 11:45:29.311744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.311764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.312040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.312060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.312242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.312269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.312515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.312535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.312765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.312785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.312948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.312968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.313151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.313170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.313419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.313440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.313607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.313627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.313823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.313842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 
00:29:54.994 [2024-07-15 11:45:29.314074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.314094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.314294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.314314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.314602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.314622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.314855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.314874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.315142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.315162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.315331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.315351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.315538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.315557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.315731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.315750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.994 [2024-07-15 11:45:29.315997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.994 [2024-07-15 11:45:29.316017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.994 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.316193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.316212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 
00:29:54.995 [2024-07-15 11:45:29.316417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.316440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.316712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.316732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.316946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.316965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.317080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.317100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.317307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.317327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.317559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.317578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.317775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.317794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.317975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.317994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.318208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.318227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.318419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.318440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 
00:29:54.995 [2024-07-15 11:45:29.318607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.318626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.318873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.318892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.319053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.319073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.319242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.319278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.319387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.319407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.319596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.319615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.319819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.319838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.320096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.320115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.320279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.320300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.320417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.320436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 
00:29:54.995 [2024-07-15 11:45:29.320666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.320686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.320784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.320804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.321003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.321022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.321178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.321197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.321375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.321396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.321569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.321588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.321702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.321722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.321829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.321849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.322009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.322028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.322277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.322297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 
00:29:54.995 [2024-07-15 11:45:29.322409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.322428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.322631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.322650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.322821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.322841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.995 qpair failed and we were unable to recover it. 00:29:54.995 [2024-07-15 11:45:29.323071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.995 [2024-07-15 11:45:29.323090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.323267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.323286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.323404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.323424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.323605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.323625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.323804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.323823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.323930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.323949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.324111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.324131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 
00:29:54.996 [2024-07-15 11:45:29.324268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.324294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.324456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.324476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.324738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.324757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.324929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.324949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.325040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.325059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.325263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.325284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.325515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.325536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.325645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.325664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.325826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.325845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.325943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.325962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 
00:29:54.996 [2024-07-15 11:45:29.326190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.326208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.326468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.326489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.326666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.326685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.326878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.326898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.327010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.327030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.327187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.327206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.327385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.327405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.327571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.327590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.327716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.327735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.327923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.327943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 
00:29:54.996 [2024-07-15 11:45:29.328173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.328193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.328419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.328439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.328530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.328549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.328665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.328685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.328800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.328818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.328943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.328961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.329131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.329150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.329328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.329348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.329472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.329492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.329682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.329701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 
00:29:54.996 [2024-07-15 11:45:29.329800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.329820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.329983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.330002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.330182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.330202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.330387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.330408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.996 qpair failed and we were unable to recover it. 00:29:54.996 [2024-07-15 11:45:29.330586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.996 [2024-07-15 11:45:29.330605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.330849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.330869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.331099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.331118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.331307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.331328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.331488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.331507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.331734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.331754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 
00:29:54.997 [2024-07-15 11:45:29.331925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.331947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.332059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.332079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.332264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.332285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.332393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.332412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.332592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.332612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.332723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.332742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.332912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.332931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.333120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.333139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.333421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.333442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.333616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.333635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 
00:29:54.997 [2024-07-15 11:45:29.333939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.333958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.334134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.334153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.334362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.334384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.334552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.334573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.334738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.334758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.334931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.334951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.335128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.335149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.335277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.335298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.335469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.335489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.335686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.335707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 
00:29:54.997 [2024-07-15 11:45:29.335966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.335986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.336161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.336181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.336429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.336450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.336558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.336578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.336698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.336718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.336896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.336917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.337110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.337130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.337261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.337283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.337459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.337480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.337563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.337583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 
00:29:54.997 [2024-07-15 11:45:29.337787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.337806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.337925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.337945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.338149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.338171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.338283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.997 [2024-07-15 11:45:29.338305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.997 qpair failed and we were unable to recover it. 00:29:54.997 [2024-07-15 11:45:29.338422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.338442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.338604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.338625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.338905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.338925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.339054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.339076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.339172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.339193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.339360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.339382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 
00:29:54.998 [2024-07-15 11:45:29.339503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.339527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.339732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.339752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.339932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.339953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.340126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.340146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.340265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.340286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.340468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.340489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.340597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.340619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.340781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.340801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.340917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.340939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.341118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.341139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 
00:29:54.998 [2024-07-15 11:45:29.341246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.341274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.341513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.341534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.341700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.341720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.341887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.341907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.342094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.342115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.342290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.342312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.342506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.342527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.342623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.342643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.342739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.342759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.342987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.343006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 
00:29:54.998 [2024-07-15 11:45:29.343178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.343197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.343375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.343395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.343492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.343512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.343609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.343628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.343787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.343806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.344061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.344080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.344252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.344278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.344444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.344464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.344772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.344791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.344903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.344922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 
00:29:54.998 [2024-07-15 11:45:29.345156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.345176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.345407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.345427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.345686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.345705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.345859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.345878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.346050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.998 [2024-07-15 11:45:29.346068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.998 qpair failed and we were unable to recover it. 00:29:54.998 [2024-07-15 11:45:29.346233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.346251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.346439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.346459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.346570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.346589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.346765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.346785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.346944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.346964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 
00:29:54.999 [2024-07-15 11:45:29.347140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.347162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.347337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.347358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.347539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.347558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.347664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.347683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.347846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.347865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.348038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.348057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.348217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.348237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.348489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.348508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.348714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.348734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.348873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.348893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 
00:29:54.999 [2024-07-15 11:45:29.349084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.349104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.349282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.349302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.349489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.349509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.349748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.349767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.349937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.349956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.350049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.350068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.350176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.350195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.350398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.350418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.350681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.350700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.350876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.350895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 
00:29:54.999 [2024-07-15 11:45:29.351055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.351074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.351182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.351201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.351359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.351378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.351475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.351494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.351761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.351779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.351885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.351904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.352064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.352083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.352194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.352214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.352310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.352330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 00:29:54.999 [2024-07-15 11:45:29.352589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.999 [2024-07-15 11:45:29.352608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:54.999 qpair failed and we were unable to recover it. 
00:29:54.999 [2024-07-15 11:45:29.352700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.352719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.352895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.352913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.353107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.353126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.353382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.353402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.353565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.353585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.353704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.353723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.353888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.353908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.354086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.354106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.354335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.354355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.354538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.354557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 
00:29:55.000 [2024-07-15 11:45:29.354813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.354836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.355015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.355034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.355212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.355230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.355344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.355365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.355475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.355494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.355585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.355604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.355750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.355769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.355884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.355903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.356013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.356032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.356155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.356174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 
00:29:55.000 [2024-07-15 11:45:29.356408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.356429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.356640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.356658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.356755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.356774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.356979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.356998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.357245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.357272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.357383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.357403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.357567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.357586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.357682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.357701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.357930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.357949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.358137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.358156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 
00:29:55.000 [2024-07-15 11:45:29.358283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.358303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.358533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.358553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.358679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.358698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.358929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.358948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.359056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.359075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.359181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.359199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.359310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.359330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.359443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.359463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.359559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.359579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.000 [2024-07-15 11:45:29.359786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.359805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 
00:29:55.000 [2024-07-15 11:45:29.359926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.000 [2024-07-15 11:45:29.359946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.000 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.360123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.360142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.360303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.360324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.360417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.360436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.360616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.360635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.360868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.360888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.361067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.361086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.361192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.361211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.361391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.361410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.361595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.361614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 
00:29:55.001 [2024-07-15 11:45:29.361785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.361810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.361970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.361989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.362219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.362238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.362369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.362389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.362543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.362562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.362653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.362672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.362928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.362947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.363176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.363195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.363306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.363326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.363510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.363530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 
00:29:55.001 [2024-07-15 11:45:29.363628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.363648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.363910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.363929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.364038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.364056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.364263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.364283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.364404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.364424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.364546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.364564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.364660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.364679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.364912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.364932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.365026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.365045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.365248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.365274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 
00:29:55.001 [2024-07-15 11:45:29.365468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.365487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.365577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.365596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.365771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.365790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.365901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.365921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.366122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.366142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.366302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.366323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.366484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.366503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.366692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.366712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.366881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.366900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.367061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.367080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 
00:29:55.001 [2024-07-15 11:45:29.367240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.367266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.001 qpair failed and we were unable to recover it. 00:29:55.001 [2024-07-15 11:45:29.367440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.001 [2024-07-15 11:45:29.367459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.367577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.367597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.367829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.367848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.367951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.367970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.368132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.368151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.368283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.368303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.368393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.368413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.368590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.368609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.368714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.368733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 
00:29:55.002 [2024-07-15 11:45:29.368841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.368864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.368974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.368992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.369085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.369105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.369337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.369356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.369553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.369572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.369690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.369710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.369879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.369899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.370065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.370084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.370188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.370207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.370375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.370395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 
00:29:55.002 [2024-07-15 11:45:29.370576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.370595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.370852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.370873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.371045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.371065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.371173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.371193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.371368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.371388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.371569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.371588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.371682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.371702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.371806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.371825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.371930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.371949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.372129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.372148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 
00:29:55.002 [2024-07-15 11:45:29.372260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.372281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.372368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.372388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.372500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.372519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.372678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.372696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.372779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.372798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.372988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.373008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.373105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.373124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.373330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.373351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.373478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.373496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.373707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.373726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 
00:29:55.002 [2024-07-15 11:45:29.373820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.373839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.373963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.373982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.002 qpair failed and we were unable to recover it. 00:29:55.002 [2024-07-15 11:45:29.374094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.002 [2024-07-15 11:45:29.374113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.003 qpair failed and we were unable to recover it. 00:29:55.003 [2024-07-15 11:45:29.374304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.003 [2024-07-15 11:45:29.374324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.003 qpair failed and we were unable to recover it. 00:29:55.003 [2024-07-15 11:45:29.374419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.003 [2024-07-15 11:45:29.374438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.003 qpair failed and we were unable to recover it. 00:29:55.003 [2024-07-15 11:45:29.374599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.003 [2024-07-15 11:45:29.374618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.003 qpair failed and we were unable to recover it. 00:29:55.003 [2024-07-15 11:45:29.374729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.003 [2024-07-15 11:45:29.374748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.003 qpair failed and we were unable to recover it. 00:29:55.003 [2024-07-15 11:45:29.374921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.003 [2024-07-15 11:45:29.374941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.003 qpair failed and we were unable to recover it. 00:29:55.003 [2024-07-15 11:45:29.375046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.003 [2024-07-15 11:45:29.375065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.003 qpair failed and we were unable to recover it. 00:29:55.003 [2024-07-15 11:45:29.375227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.003 [2024-07-15 11:45:29.375247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.003 qpair failed and we were unable to recover it. 
00:29:55.003 [2024-07-15 11:45:29.375412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.003 [2024-07-15 11:45:29.375435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.003 qpair failed and we were unable to recover it. 00:29:55.003 [2024-07-15 11:45:29.375597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.003 [2024-07-15 11:45:29.375616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.003 qpair failed and we were unable to recover it. 00:29:55.003 [2024-07-15 11:45:29.375739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.003 [2024-07-15 11:45:29.375757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.003 qpair failed and we were unable to recover it. 00:29:55.003 [2024-07-15 11:45:29.375858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.003 [2024-07-15 11:45:29.375877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.003 qpair failed and we were unable to recover it. 00:29:55.003 [2024-07-15 11:45:29.376054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.003 [2024-07-15 11:45:29.376073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.003 qpair failed and we were unable to recover it. 00:29:55.003 [2024-07-15 11:45:29.376246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.003 [2024-07-15 11:45:29.376280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.003 qpair failed and we were unable to recover it. 00:29:55.003 [2024-07-15 11:45:29.376456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.003 [2024-07-15 11:45:29.376475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.003 qpair failed and we were unable to recover it. 00:29:55.003 [2024-07-15 11:45:29.376655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.003 [2024-07-15 11:45:29.376674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.003 qpair failed and we were unable to recover it. 00:29:55.003 [2024-07-15 11:45:29.376844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.003 [2024-07-15 11:45:29.376863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.003 qpair failed and we were unable to recover it. 00:29:55.003 [2024-07-15 11:45:29.376972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.003 [2024-07-15 11:45:29.376991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.003 qpair failed and we were unable to recover it. 
00:29:55.003 [2024-07-15 11:45:29.377150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.003 [2024-07-15 11:45:29.377169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.003 qpair failed and we were unable to recover it. 00:29:55.003 [2024-07-15 11:45:29.377362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.003 [2024-07-15 11:45:29.377382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.003 qpair failed and we were unable to recover it. 00:29:55.003 [2024-07-15 11:45:29.377593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.003 [2024-07-15 11:45:29.377613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.003 qpair failed and we were unable to recover it. 00:29:55.003 [2024-07-15 11:45:29.377774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.003 [2024-07-15 11:45:29.377793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.003 qpair failed and we were unable to recover it. 00:29:55.003 [2024-07-15 11:45:29.377956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.003 [2024-07-15 11:45:29.377976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.003 qpair failed and we were unable to recover it. 00:29:55.003 [2024-07-15 11:45:29.378100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.003 [2024-07-15 11:45:29.378119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.003 qpair failed and we were unable to recover it. 00:29:55.003 [2024-07-15 11:45:29.378218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.003 [2024-07-15 11:45:29.378237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7398000b90 with addr=10.0.0.2, port=4420 00:29:55.003 qpair failed and we were unable to recover it. 00:29:55.003 A controller has encountered a failure and is being reset. 00:29:55.003 [2024-07-15 11:45:29.378571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.003 [2024-07-15 11:45:29.378633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c7e60 with addr=10.0.0.2, port=4420 00:29:55.003 [2024-07-15 11:45:29.378661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c7e60 is same with the state(5) to be set 00:29:55.003 [2024-07-15 11:45:29.378696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c7e60 (9): Bad file descriptor 00:29:55.003 [2024-07-15 11:45:29.378723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.003 [2024-07-15 11:45:29.378746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.003 [2024-07-15 11:45:29.378769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:55.003 Unable to reset the controller. 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:55.263 Malloc0 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:55.263 [2024-07-15 11:45:29.621229] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:55.263 11:45:29 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:55.263 [2024-07-15 11:45:29.653851] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.263 11:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2971664 00:29:56.199 Controller properly reset. 00:30:00.388 Initializing NVMe Controllers 00:30:00.388 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:00.388 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:00.388 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:00.388 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:00.388 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:00.388 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:00.388 Initialization complete. Launching workers. 
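The records above are the recovery half of the tc2 case: once the target process is back, the test script rebuilds the subsystem over the RPC socket (malloc bdev, TCP transport, nqn.2016-06.io.spdk:cnode1 with a namespace, data and discovery listeners on 10.0.0.2:4420), the stuck controller is finally reset ("Controller properly reset."), and the I/O application reattaches with one worker per core, as the per-core thread start-up lines that follow show. The same bring-up could be replayed by hand; the sketch below reuses the exact RPC names and arguments from the trace, while the scripts/rpc.py path, the assumption that rpc_cmd forwards to it, and the assumption that an nvmf_tgt is already listening on its default RPC socket are additions, not something the log confirms:

  # Minimal manual replay of the target-side bring-up shown in the trace.
  # RPC names and arguments are copied verbatim from the rpc_cmd calls above;
  # the rpc.py path is assumed relative to an SPDK checkout.
  RPC="./scripts/rpc.py"
  $RPC bdev_malloc_create 64 512 -b Malloc0                # backing namespace
  $RPC nvmf_create_transport -t tcp -o                     # TCP transport init
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420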
00:30:00.388 Starting thread on core 1 00:30:00.388 Starting thread on core 2 00:30:00.388 Starting thread on core 3 00:30:00.388 Starting thread on core 0 00:30:00.388 11:45:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:00.388 00:30:00.388 real 0m11.371s 00:30:00.388 user 0m35.825s 00:30:00.388 sys 0m5.665s 00:30:00.388 11:45:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:00.388 11:45:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:00.388 ************************************ 00:30:00.388 END TEST nvmf_target_disconnect_tc2 00:30:00.388 ************************************ 00:30:00.388 11:45:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:30:00.388 11:45:34 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:00.388 11:45:34 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:00.388 11:45:34 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:00.388 11:45:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:00.388 11:45:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:30:00.388 11:45:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:00.388 11:45:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:30:00.388 11:45:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:00.388 11:45:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:00.388 rmmod nvme_tcp 00:30:00.388 rmmod nvme_fabrics 00:30:00.646 rmmod nvme_keyring 00:30:00.646 11:45:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:00.646 11:45:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:30:00.646 11:45:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:30:00.646 11:45:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2972208 ']' 00:30:00.646 11:45:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2972208 00:30:00.646 11:45:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2972208 ']' 00:30:00.646 11:45:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2972208 00:30:00.646 11:45:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:30:00.646 11:45:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:00.646 11:45:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2972208 00:30:00.646 11:45:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:30:00.646 11:45:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:30:00.646 11:45:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2972208' 00:30:00.646 killing process with pid 2972208 00:30:00.646 11:45:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2972208 00:30:00.646 11:45:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2972208 00:30:00.905 
11:45:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:00.905 11:45:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:00.905 11:45:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:00.905 11:45:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:00.905 11:45:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:00.905 11:45:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.905 11:45:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:00.905 11:45:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.442 11:45:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:03.442 00:30:03.442 real 0m20.192s 00:30:03.442 user 1m2.195s 00:30:03.442 sys 0m10.923s 00:30:03.442 11:45:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:03.442 11:45:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:03.442 ************************************ 00:30:03.442 END TEST nvmf_target_disconnect 00:30:03.442 ************************************ 00:30:03.442 11:45:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:03.442 11:45:37 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:30:03.442 11:45:37 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:03.442 11:45:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:03.442 11:45:37 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:30:03.442 00:30:03.442 real 23m20.680s 00:30:03.442 user 51m39.778s 00:30:03.442 sys 6m45.974s 00:30:03.442 11:45:37 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:03.442 11:45:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:03.442 ************************************ 00:30:03.442 END TEST nvmf_tcp 00:30:03.442 ************************************ 00:30:03.442 11:45:37 -- common/autotest_common.sh@1142 -- # return 0 00:30:03.442 11:45:37 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:30:03.442 11:45:37 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:03.442 11:45:37 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:03.442 11:45:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:03.442 11:45:37 -- common/autotest_common.sh@10 -- # set +x 00:30:03.442 ************************************ 00:30:03.442 START TEST spdkcli_nvmf_tcp 00:30:03.442 ************************************ 00:30:03.442 11:45:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:03.442 * Looking for test storage... 
00:30:03.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2973928 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2973928 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 2973928 ']' 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:03.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:03.443 [2024-07-15 11:45:37.627200] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:30:03.443 [2024-07-15 11:45:37.627266] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2973928 ] 00:30:03.443 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.443 [2024-07-15 11:45:37.707857] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:03.443 [2024-07-15 11:45:37.800069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.443 [2024-07-15 11:45:37.800075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:03.443 11:45:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:03.702 11:45:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:03.702 11:45:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:03.702 11:45:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:03.702 11:45:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:03.702 11:45:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:03.702 11:45:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:03.702 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:03.702 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:03.702 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:03.702 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:03.702 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:03.702 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:03.702 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:03.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:03.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:03.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:03.702 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:03.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:03.702 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:03.702 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:03.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:03.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:03.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:03.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:03.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:03.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:03.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:03.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:03.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:03.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:03.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:03.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:03.702 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:03.702 ' 00:30:06.232 [2024-07-15 11:45:40.622332] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:07.605 [2024-07-15 11:45:41.943117] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:10.191 [2024-07-15 11:45:44.399176] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:12.093 [2024-07-15 11:45:46.486242] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:13.995 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:13.995 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:13.995 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:13.995 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:13.995 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:13.995 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:13.995 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:13.995 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:13.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:13.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:13.995 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:13.995 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:13.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:13.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:13.995 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:13.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:13.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:13.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:13.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:13.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:13.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:13.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:13.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:13.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:13.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:13.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:13.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:13.995 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:13.995 11:45:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:13.995 11:45:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:13.995 11:45:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:13.995 11:45:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:13.995 11:45:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:13.995 11:45:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:13.995 11:45:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:13.995 11:45:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:14.253 11:45:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:14.253 11:45:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:14.253 11:45:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:14.253 11:45:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:14.253 11:45:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:14.510 11:45:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:14.510 11:45:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:14.510 11:45:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:14.510 11:45:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:14.510 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:14.510 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:14.510 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:14.510 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:14.510 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:14.510 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:14.510 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:14.510 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:14.510 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:14.510 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:14.510 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:14.510 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:14.510 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:14.510 ' 00:30:19.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:19.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:19.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:19.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:19.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:19.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:19.793 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:19.793 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:19.793 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:19.793 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:19.793 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:30:19.793 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:19.793 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:19.793 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:20.052 11:45:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:20.052 11:45:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:20.052 11:45:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:20.052 11:45:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2973928 00:30:20.052 11:45:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2973928 ']' 00:30:20.052 11:45:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2973928 00:30:20.052 11:45:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:30:20.052 11:45:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:20.052 11:45:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2973928 00:30:20.052 11:45:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:20.052 11:45:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:20.052 11:45:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2973928' 00:30:20.052 killing process with pid 2973928 00:30:20.052 11:45:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 2973928 00:30:20.052 11:45:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 2973928 00:30:20.311 11:45:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:20.311 11:45:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:20.311 11:45:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2973928 ']' 00:30:20.311 11:45:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2973928 00:30:20.311 11:45:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2973928 ']' 00:30:20.311 11:45:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2973928 00:30:20.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2973928) - No such process 00:30:20.311 11:45:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 2973928 is not found' 00:30:20.311 Process with pid 2973928 is not found 00:30:20.311 11:45:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:20.311 11:45:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:20.311 11:45:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:20.311 00:30:20.311 real 0m17.099s 00:30:20.311 user 0m37.585s 00:30:20.311 sys 0m0.877s 00:30:20.311 11:45:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:20.311 11:45:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:20.311 ************************************ 00:30:20.311 END TEST spdkcli_nvmf_tcp 00:30:20.311 ************************************ 00:30:20.311 11:45:54 -- common/autotest_common.sh@1142 -- # return 0 00:30:20.311 11:45:54 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:20.311 11:45:54 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:20.311 11:45:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:20.311 11:45:54 -- common/autotest_common.sh@10 -- # set +x 00:30:20.311 ************************************ 00:30:20.311 START TEST nvmf_identify_passthru 00:30:20.311 ************************************ 00:30:20.311 11:45:54 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:20.311 * Looking for test storage... 00:30:20.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:20.311 11:45:54 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:20.311 11:45:54 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:20.311 11:45:54 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:20.311 11:45:54 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:20.311 11:45:54 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.311 11:45:54 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.311 11:45:54 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.311 11:45:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:20.311 11:45:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:20.311 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:20.311 11:45:54 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:20.311 11:45:54 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:20.311 11:45:54 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:20.311 11:45:54 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:20.312 11:45:54 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.312 11:45:54 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.312 11:45:54 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.312 11:45:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:20.312 11:45:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.312 11:45:54 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:20.312 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:20.312 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:20.312 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:20.312 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:20.312 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:20.312 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.312 11:45:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:20.312 11:45:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.312 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:20.312 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:20.312 11:45:54 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:30:20.312 11:45:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:26.875 11:46:00 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:26.875 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:26.875 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:26.875 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:26.876 Found net devices under 0000:af:00.0: cvl_0_0 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:26.876 Found net devices under 0000:af:00.1: cvl_0_1 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
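The device-discovery loop traced above resolves each supported Intel E810 PCI function to its kernel network interface by globbing sysfs and keeping only interfaces that are up. A minimal standalone sketch of that lookup, assuming the PCI address 0000:af:00.0 seen in this run (any other function works the same way):

    # Resolve one PCI function to the netdev(s) the kernel created for it.
    pci=0000:af:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:af:00.0/net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"

The trace additionally checks that the interface is up before accepting it; the sketch omits that filter for brevity.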
00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:26.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:26.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:30:26.876 00:30:26.876 --- 10.0.0.2 ping statistics --- 00:30:26.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.876 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:26.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:26.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:30:26.876 00:30:26.876 --- 10.0.0.1 ping statistics --- 00:30:26.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.876 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:26.876 11:46:00 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:26.876 11:46:00 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:26.876 11:46:00 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:26.876 11:46:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:26.876 11:46:00 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:26.876 11:46:00 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:30:26.876 11:46:00 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:30:26.876 11:46:00 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:30:26.876 11:46:00 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:30:26.876 11:46:00 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:26.876 11:46:00 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:30:26.876 11:46:00 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:26.876 11:46:00 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:26.876 11:46:00 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:26.876 11:46:00 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:26.876 11:46:00 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:86:00.0 00:30:26.876 11:46:00 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:86:00.0 00:30:26.876 11:46:00 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:86:00.0 00:30:26.876 11:46:00 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:86:00.0 ']' 00:30:26.876 11:46:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:30:26.876 11:46:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:26.876 11:46:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:26.876 EAL: No free 2048 kB hugepages reported on node 1 00:30:31.069 
11:46:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ916308MR1P0FGN 00:30:31.069 11:46:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:30:31.069 11:46:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:31.069 11:46:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:31.069 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.255 11:46:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:35.255 11:46:09 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:35.255 11:46:09 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:35.255 11:46:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:35.255 11:46:09 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:35.255 11:46:09 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:35.255 11:46:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:35.255 11:46:09 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2981629 00:30:35.255 11:46:09 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:35.255 11:46:09 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:35.255 11:46:09 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2981629 00:30:35.255 11:46:09 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 2981629 ']' 00:30:35.255 11:46:09 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.255 11:46:09 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:35.255 11:46:09 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.255 11:46:09 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:35.255 11:46:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:35.255 [2024-07-15 11:46:09.245954] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:30:35.255 [2024-07-15 11:46:09.246012] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.255 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.255 [2024-07-15 11:46:09.332920] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:35.255 [2024-07-15 11:46:09.423202] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:35.255 [2024-07-15 11:46:09.423244] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
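Both the local PCIe identify above and the later NVMe-oF identify reduce spdk_nvme_identify output to a single field with the same grep/awk pipeline. A hedged recap of that pattern, using the paths and the PCIe address 0000:86:00.0 from this run:

    identify=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
    serial=$("$identify" -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 | grep 'Serial Number:' | awk '{print $3}')
    model=$("$identify" -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 | grep 'Model Number:'  | awk '{print $3}')
    echo "serial=$serial model=$model"

The later check at target/identify_passthru.sh@54 runs the same pipeline against -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' and compares the two results.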
00:30:35.255 [2024-07-15 11:46:09.423259] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:35.255 [2024-07-15 11:46:09.423268] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:35.255 [2024-07-15 11:46:09.423275] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:35.255 [2024-07-15 11:46:09.423329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:35.255 [2024-07-15 11:46:09.423440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:35.255 [2024-07-15 11:46:09.423551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.255 [2024-07-15 11:46:09.423551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:35.823 11:46:10 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:35.823 11:46:10 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:30:35.823 11:46:10 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:35.823 11:46:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.823 11:46:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:35.823 INFO: Log level set to 20 00:30:35.823 INFO: Requests: 00:30:35.823 { 00:30:35.823 "jsonrpc": "2.0", 00:30:35.823 "method": "nvmf_set_config", 00:30:35.823 "id": 1, 00:30:35.823 "params": { 00:30:35.823 "admin_cmd_passthru": { 00:30:35.823 "identify_ctrlr": true 00:30:35.823 } 00:30:35.823 } 00:30:35.823 } 00:30:35.823 00:30:35.823 INFO: response: 00:30:35.823 { 00:30:35.823 "jsonrpc": "2.0", 00:30:35.823 "id": 1, 00:30:35.823 "result": true 00:30:35.823 } 00:30:35.823 00:30:35.823 11:46:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.823 11:46:10 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:35.823 11:46:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.823 11:46:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:35.823 INFO: Setting log level to 20 00:30:35.823 INFO: Setting log level to 20 00:30:35.823 INFO: Log level set to 20 00:30:35.823 INFO: Log level set to 20 00:30:35.823 INFO: Requests: 00:30:35.823 { 00:30:35.823 "jsonrpc": "2.0", 00:30:35.823 "method": "framework_start_init", 00:30:35.823 "id": 1 00:30:35.823 } 00:30:35.823 00:30:35.823 INFO: Requests: 00:30:35.823 { 00:30:35.823 "jsonrpc": "2.0", 00:30:35.823 "method": "framework_start_init", 00:30:35.823 "id": 1 00:30:35.823 } 00:30:35.823 00:30:35.823 [2024-07-15 11:46:10.216785] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:35.823 INFO: response: 00:30:35.823 { 00:30:35.823 "jsonrpc": "2.0", 00:30:35.823 "id": 1, 00:30:35.823 "result": true 00:30:35.823 } 00:30:35.823 00:30:35.823 INFO: response: 00:30:35.823 { 00:30:35.823 "jsonrpc": "2.0", 00:30:35.823 "id": 1, 00:30:35.823 "result": true 00:30:35.823 } 00:30:35.823 00:30:35.823 11:46:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.823 11:46:10 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:35.823 11:46:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.823 11:46:10 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:35.823 INFO: Setting log level to 40 00:30:35.823 INFO: Setting log level to 40 00:30:35.823 INFO: Setting log level to 40 00:30:35.823 [2024-07-15 11:46:10.230494] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:35.823 11:46:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.823 11:46:10 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:35.823 11:46:10 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:35.823 11:46:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:35.823 11:46:10 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:86:00.0 00:30:35.823 11:46:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.823 11:46:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:39.108 Nvme0n1 00:30:39.108 11:46:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.108 11:46:13 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:39.108 11:46:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.108 11:46:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:39.108 11:46:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.108 11:46:13 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:39.108 11:46:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.108 11:46:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:39.108 11:46:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.108 11:46:13 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:39.108 11:46:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.108 11:46:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:39.108 [2024-07-15 11:46:13.165613] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:39.108 11:46:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.108 11:46:13 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:39.108 11:46:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.108 11:46:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:39.108 [ 00:30:39.108 { 00:30:39.108 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:39.108 "subtype": "Discovery", 00:30:39.108 "listen_addresses": [], 00:30:39.108 "allow_any_host": true, 00:30:39.108 "hosts": [] 00:30:39.108 }, 00:30:39.108 { 00:30:39.108 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:39.108 "subtype": "NVMe", 00:30:39.108 "listen_addresses": [ 00:30:39.108 { 00:30:39.108 "trtype": "TCP", 00:30:39.108 "adrfam": "IPv4", 00:30:39.108 "traddr": "10.0.0.2", 00:30:39.108 "trsvcid": "4420" 00:30:39.108 } 00:30:39.108 ], 00:30:39.108 "allow_any_host": true, 00:30:39.108 "hosts": [], 00:30:39.108 "serial_number": 
"SPDK00000000000001", 00:30:39.108 "model_number": "SPDK bdev Controller", 00:30:39.108 "max_namespaces": 1, 00:30:39.108 "min_cntlid": 1, 00:30:39.108 "max_cntlid": 65519, 00:30:39.108 "namespaces": [ 00:30:39.108 { 00:30:39.108 "nsid": 1, 00:30:39.108 "bdev_name": "Nvme0n1", 00:30:39.108 "name": "Nvme0n1", 00:30:39.108 "nguid": "FA876A7B20BB454493C347E734E94750", 00:30:39.108 "uuid": "fa876a7b-20bb-4544-93c3-47e734e94750" 00:30:39.108 } 00:30:39.108 ] 00:30:39.108 } 00:30:39.108 ] 00:30:39.108 11:46:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.108 11:46:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:39.108 11:46:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:39.108 11:46:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:39.108 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.108 11:46:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ916308MR1P0FGN 00:30:39.108 11:46:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:39.108 11:46:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:39.108 11:46:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:39.108 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.367 11:46:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:39.367 11:46:13 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ916308MR1P0FGN '!=' BTLJ916308MR1P0FGN ']' 00:30:39.367 11:46:13 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:39.367 11:46:13 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:39.367 11:46:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.367 11:46:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:39.367 11:46:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.367 11:46:13 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:39.367 11:46:13 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:39.367 11:46:13 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:39.367 11:46:13 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:30:39.367 11:46:13 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:39.367 11:46:13 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:30:39.367 11:46:13 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:39.367 11:46:13 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:39.367 rmmod nvme_tcp 00:30:39.367 rmmod nvme_fabrics 00:30:39.367 rmmod nvme_keyring 00:30:39.367 11:46:13 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:39.367 11:46:13 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:30:39.367 11:46:13 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:30:39.367 11:46:13 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2981629 ']' 00:30:39.367 11:46:13 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2981629 00:30:39.367 11:46:13 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 2981629 ']' 00:30:39.367 11:46:13 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 2981629 00:30:39.367 11:46:13 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:30:39.367 11:46:13 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:39.367 11:46:13 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2981629 00:30:39.367 11:46:13 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:39.367 11:46:13 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:39.367 11:46:13 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2981629' 00:30:39.367 killing process with pid 2981629 00:30:39.367 11:46:13 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 2981629 00:30:39.367 11:46:13 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 2981629 00:30:41.269 11:46:15 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:41.269 11:46:15 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:41.269 11:46:15 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:41.269 11:46:15 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:41.269 11:46:15 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:41.269 11:46:15 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.269 11:46:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:41.269 11:46:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.172 11:46:17 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:43.172 00:30:43.172 real 0m22.824s 00:30:43.172 user 0m31.284s 00:30:43.172 sys 0m5.390s 00:30:43.172 11:46:17 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:43.172 11:46:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:43.172 ************************************ 00:30:43.172 END TEST nvmf_identify_passthru 00:30:43.172 ************************************ 00:30:43.172 11:46:17 -- common/autotest_common.sh@1142 -- # return 0 00:30:43.172 11:46:17 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:43.172 11:46:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:43.172 11:46:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:43.172 11:46:17 -- common/autotest_common.sh@10 -- # set +x 00:30:43.172 ************************************ 00:30:43.172 START TEST nvmf_dif 00:30:43.172 ************************************ 00:30:43.172 11:46:17 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:43.172 * Looking for test storage... 
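The passthru target torn down above was configured entirely over JSON-RPC; the rpc_cmd wrapper in the trace drives scripts/rpc.py. A condensed sketch of that sequence, assuming the default /var/tmp/spdk.sock RPC socket (method names, NQN, PCIe address and listener are taken from this run):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_set_config --passthru-identify-ctrlr          # enable the custom identify-ctrlr handler
    "$rpc" framework_start_init                               # leave the --wait-for-rpc hold state
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:86:00.0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Running "$rpc" nvmf_get_subsystems afterwards returns the subsystem listing captured earlier in the log.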
00:30:43.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:43.172 11:46:17 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:43.172 11:46:17 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:43.172 11:46:17 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:43.172 11:46:17 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:43.172 11:46:17 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.172 11:46:17 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.172 11:46:17 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.172 11:46:17 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:30:43.172 11:46:17 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:43.172 11:46:17 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:43.172 11:46:17 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:43.172 11:46:17 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:43.172 11:46:17 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:43.172 11:46:17 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:43.430 11:46:17 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:43.431 11:46:17 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:43.431 11:46:17 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:43.431 11:46:17 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:43.431 11:46:17 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:43.431 11:46:17 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:43.431 11:46:17 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.431 11:46:17 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:43.431 11:46:17 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.431 11:46:17 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:43.431 11:46:17 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:43.431 11:46:17 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:43.431 11:46:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:48.701 11:46:23 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:48.701 11:46:23 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:48.702 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:48.702 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
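The nvmf_tcp_init step that follows (and that already ran once for the previous test) turns the two detected E810 ports into a self-contained target/initiator pair by moving one of them into a network namespace. A condensed sketch of the commands the trace replays, after flushing any stale addresses, using the interface names and 10.0.0.x addresses from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator-side port stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

The single-packet pings are the same connectivity check whose output appears in the trace; the nvmf target application is then launched via ip netns exec cvl_0_0_ns_spdk so it listens on 10.0.0.2.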
00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:48.702 Found net devices under 0000:af:00.0: cvl_0_0 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:48.702 Found net devices under 0000:af:00.1: cvl_0_1 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:48.702 11:46:23 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:48.960 11:46:23 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:48.960 11:46:23 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:48.960 11:46:23 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:48.960 11:46:23 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:48.960 11:46:23 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:48.960 11:46:23 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:48.960 11:46:23 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:48.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:48.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:30:48.960 00:30:48.960 --- 10.0.0.2 ping statistics --- 00:30:48.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.960 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:30:48.960 11:46:23 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:48.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:48.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:30:48.960 00:30:48.960 --- 10.0.0.1 ping statistics --- 00:30:48.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.960 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:30:48.960 11:46:23 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:48.960 11:46:23 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:48.960 11:46:23 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:48.960 11:46:23 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:52.245 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:30:52.245 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:52.245 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:30:52.245 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:30:52.245 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:30:52.245 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:30:52.245 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:30:52.245 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:30:52.245 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:30:52.245 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:30:52.245 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:30:52.245 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:30:52.245 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:30:52.245 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:30:52.245 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:30:52.245 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:30:52.245 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:30:52.245 11:46:26 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:52.245 11:46:26 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:52.245 11:46:26 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:52.245 11:46:26 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:52.245 11:46:26 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:52.245 11:46:26 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:52.245 11:46:26 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:52.245 11:46:26 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:52.245 11:46:26 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:52.245 11:46:26 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:52.245 11:46:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:52.245 11:46:26 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2987448 00:30:52.246 11:46:26 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2987448 00:30:52.246 11:46:26 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:52.246 11:46:26 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 2987448 ']' 00:30:52.246 11:46:26 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:52.246 11:46:26 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:52.246 11:46:26 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:52.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:52.246 11:46:26 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:52.246 11:46:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:52.246 [2024-07-15 11:46:26.266792] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:30:52.246 [2024-07-15 11:46:26.266846] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:52.246 EAL: No free 2048 kB hugepages reported on node 1 00:30:52.246 [2024-07-15 11:46:26.353160] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.246 [2024-07-15 11:46:26.441695] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:52.246 [2024-07-15 11:46:26.441738] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:52.246 [2024-07-15 11:46:26.441747] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:52.246 [2024-07-15 11:46:26.441756] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:52.246 [2024-07-15 11:46:26.441763] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
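Stripped of the xtrace prefixes, the bring-up recorded in the last few entries is a short, repeatable sequence: one port of the E810 pair is moved into a private namespace to act as the target, the other stays in the root namespace as the initiator, TCP port 4420 is opened, and nvmf_tgt is started inside the namespace. A condensed replay of those commands, with interface and namespace names exactly as above; the nvmf_tgt path is shortened to a repo-relative one, and the final wait is the suite's waitforlisten helper rather than a standalone tool:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
# waitforlisten <pid> then polls until /var/tmp/spdk.sock accepts RPCs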
00:30:52.246 [2024-07-15 11:46:26.441786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.813 11:46:27 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:52.813 11:46:27 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:30:52.813 11:46:27 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:52.813 11:46:27 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:52.813 11:46:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:52.813 11:46:27 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:52.813 11:46:27 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:52.813 11:46:27 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:52.813 11:46:27 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.813 11:46:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:52.813 [2024-07-15 11:46:27.178561] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:52.813 11:46:27 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.813 11:46:27 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:52.813 11:46:27 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:52.813 11:46:27 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:52.813 11:46:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:52.813 ************************************ 00:30:52.813 START TEST fio_dif_1_default 00:30:52.813 ************************************ 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:52.813 bdev_null0 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:52.813 [2024-07-15 11:46:27.246860] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:52.813 { 00:30:52.813 "params": { 00:30:52.813 "name": "Nvme$subsystem", 00:30:52.813 "trtype": "$TEST_TRANSPORT", 00:30:52.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:52.813 "adrfam": "ipv4", 00:30:52.813 "trsvcid": "$NVMF_PORT", 00:30:52.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:52.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:52.813 "hdgst": ${hdgst:-false}, 00:30:52.813 "ddgst": ${ddgst:-false} 00:30:52.813 }, 00:30:52.813 "method": "bdev_nvme_attach_controller" 00:30:52.813 } 00:30:52.813 EOF 00:30:52.813 )") 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default 
-- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:52.813 11:46:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:52.813 "params": { 00:30:52.813 "name": "Nvme0", 00:30:52.813 "trtype": "tcp", 00:30:52.813 "traddr": "10.0.0.2", 00:30:52.813 "adrfam": "ipv4", 00:30:52.813 "trsvcid": "4420", 00:30:52.813 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:52.813 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:52.813 "hdgst": false, 00:30:52.813 "ddgst": false 00:30:52.813 }, 00:30:52.813 "method": "bdev_nvme_attach_controller" 00:30:52.813 }' 00:30:53.099 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:53.099 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:53.099 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:53.099 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.099 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:53.099 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:53.099 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:53.099 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:53.099 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:53.099 11:46:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.410 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:53.410 fio-3.35 00:30:53.410 Starting 1 thread 00:30:53.410 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.620 00:31:05.620 filename0: (groupid=0, jobs=1): err= 0: pid=2987877: Mon Jul 15 11:46:38 2024 00:31:05.620 read: IOPS=96, BW=386KiB/s (395kB/s)(3856KiB/10002msec) 00:31:05.620 slat (nsec): min=9884, max=60371, avg=20795.66, stdev=2342.15 00:31:05.620 clat (usec): min=40762, max=46822, avg=41444.01, stdev=608.01 00:31:05.620 lat (usec): min=40782, max=46847, avg=41464.81, stdev=607.85 00:31:05.620 clat percentiles (usec): 00:31:05.620 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:05.621 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:31:05.621 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:05.621 | 99.00th=[42206], 99.50th=[42730], 99.90th=[46924], 99.95th=[46924], 00:31:05.621 | 99.99th=[46924] 00:31:05.621 bw ( KiB/s): min= 352, max= 416, per=99.61%, avg=384.00, stdev=14.68, samples=20 00:31:05.621 iops : min= 88, max= 104, avg=96.00, stdev= 3.67, samples=20 00:31:05.621 
lat (msec) : 50=100.00% 00:31:05.621 cpu : usr=94.07%, sys=5.39%, ctx=10, majf=0, minf=221 00:31:05.621 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:05.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.621 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.621 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:05.621 00:31:05.621 Run status group 0 (all jobs): 00:31:05.621 READ: bw=386KiB/s (395kB/s), 386KiB/s-386KiB/s (395kB/s-395kB/s), io=3856KiB (3949kB), run=10002-10002msec 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.621 00:31:05.621 real 0m11.252s 00:31:05.621 user 0m20.892s 00:31:05.621 sys 0m0.867s 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:05.621 ************************************ 00:31:05.621 END TEST fio_dif_1_default 00:31:05.621 ************************************ 00:31:05.621 11:46:38 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:05.621 11:46:38 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:05.621 11:46:38 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:05.621 11:46:38 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:05.621 11:46:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:05.621 ************************************ 00:31:05.621 START TEST fio_dif_1_multi_subsystems 00:31:05.621 ************************************ 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:05.621 11:46:38 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:05.621 bdev_null0 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:05.621 [2024-07-15 11:46:38.576615] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:05.621 bdev_null1 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:05.621 { 00:31:05.621 "params": { 00:31:05.621 "name": "Nvme$subsystem", 00:31:05.621 "trtype": "$TEST_TRANSPORT", 00:31:05.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.621 "adrfam": "ipv4", 00:31:05.621 "trsvcid": "$NVMF_PORT", 00:31:05.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.621 "hdgst": ${hdgst:-false}, 00:31:05.621 "ddgst": ${ddgst:-false} 00:31:05.621 }, 00:31:05.621 "method": "bdev_nvme_attach_controller" 00:31:05.621 } 00:31:05.621 EOF 00:31:05.621 )") 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:05.621 
11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:05.621 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:05.621 { 00:31:05.621 "params": { 00:31:05.621 "name": "Nvme$subsystem", 00:31:05.621 "trtype": "$TEST_TRANSPORT", 00:31:05.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.621 "adrfam": "ipv4", 00:31:05.621 "trsvcid": "$NVMF_PORT", 00:31:05.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.621 "hdgst": ${hdgst:-false}, 00:31:05.621 "ddgst": ${ddgst:-false} 00:31:05.622 }, 00:31:05.622 "method": "bdev_nvme_attach_controller" 00:31:05.622 } 00:31:05.622 EOF 00:31:05.622 )") 00:31:05.622 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:05.622 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:05.622 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:05.622 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
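Only the bdev-attach JSON (printed just below) is echoed into the log; the fio job file that gen_fio_conf writes to the other descriptor is not. A hand-written equivalent, reconstructed from the job lines fio prints at startup (jobs filename0 and filename1, randread, 4096-byte blocks, iodepth 4), would look roughly like the sketch below. The Nvme0n1/Nvme1n1 filenames are an assumption about how the attached controllers' namespaces are named, and bdev.json stands in for the JSON the test feeds through /dev/fd/62; neither appears verbatim in this trace.

cat > dif_multi.fio <<EOF
[global]
# the SPDK fio plugins require thread mode
thread=1
rw=randread
bs=4096
iodepth=4

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=bdev.json dif_multi.fio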
00:31:05.622 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:31:05.622 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:05.622 "params": { 00:31:05.622 "name": "Nvme0", 00:31:05.622 "trtype": "tcp", 00:31:05.622 "traddr": "10.0.0.2", 00:31:05.622 "adrfam": "ipv4", 00:31:05.622 "trsvcid": "4420", 00:31:05.622 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:05.622 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:05.622 "hdgst": false, 00:31:05.622 "ddgst": false 00:31:05.622 }, 00:31:05.622 "method": "bdev_nvme_attach_controller" 00:31:05.622 },{ 00:31:05.622 "params": { 00:31:05.622 "name": "Nvme1", 00:31:05.622 "trtype": "tcp", 00:31:05.622 "traddr": "10.0.0.2", 00:31:05.622 "adrfam": "ipv4", 00:31:05.622 "trsvcid": "4420", 00:31:05.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:05.622 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:05.622 "hdgst": false, 00:31:05.622 "ddgst": false 00:31:05.622 }, 00:31:05.622 "method": "bdev_nvme_attach_controller" 00:31:05.622 }' 00:31:05.622 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:05.622 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:05.622 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:05.622 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:05.622 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:05.622 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:05.622 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:05.622 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:05.622 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:05.622 11:46:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:05.622 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:05.622 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:05.622 fio-3.35 00:31:05.622 Starting 2 threads 00:31:05.622 EAL: No free 2048 kB hugepages reported on node 1 00:31:15.662 00:31:15.662 filename0: (groupid=0, jobs=1): err= 0: pid=2990116: Mon Jul 15 11:46:50 2024 00:31:15.662 read: IOPS=188, BW=754KiB/s (772kB/s)(7568KiB/10037msec) 00:31:15.662 slat (nsec): min=10680, max=61574, avg=22212.42, stdev=3614.44 00:31:15.662 clat (usec): min=659, max=42963, avg=21156.14, stdev=20267.40 00:31:15.662 lat (usec): min=680, max=42993, avg=21178.35, stdev=20266.34 00:31:15.662 clat percentiles (usec): 00:31:15.662 | 1.00th=[ 676], 5.00th=[ 693], 10.00th=[ 709], 20.00th=[ 734], 00:31:15.662 | 30.00th=[ 758], 40.00th=[ 816], 50.00th=[41157], 60.00th=[41157], 00:31:15.662 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:31:15.662 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:31:15.662 | 99.99th=[42730] 
00:31:15.662 bw ( KiB/s): min= 704, max= 768, per=66.24%, avg=755.20, stdev=26.27, samples=20 00:31:15.662 iops : min= 176, max= 192, avg=188.80, stdev= 6.57, samples=20 00:31:15.662 lat (usec) : 750=26.37%, 1000=22.62% 00:31:15.662 lat (msec) : 2=0.69%, 50=50.32% 00:31:15.662 cpu : usr=96.61%, sys=2.84%, ctx=10, majf=0, minf=179 00:31:15.662 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:15.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.662 issued rwts: total=1892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.662 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:15.662 filename1: (groupid=0, jobs=1): err= 0: pid=2990117: Mon Jul 15 11:46:50 2024 00:31:15.662 read: IOPS=96, BW=386KiB/s (395kB/s)(3872KiB/10034msec) 00:31:15.662 slat (nsec): min=9826, max=46685, avg=23205.12, stdev=4378.36 00:31:15.662 clat (usec): min=40757, max=42881, avg=41396.54, stdev=496.96 00:31:15.662 lat (usec): min=40777, max=42910, avg=41419.75, stdev=497.33 00:31:15.662 clat percentiles (usec): 00:31:15.662 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:15.662 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:31:15.662 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:31:15.662 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:31:15.662 | 99.99th=[42730] 00:31:15.662 bw ( KiB/s): min= 384, max= 416, per=33.78%, avg=385.60, stdev= 7.16, samples=20 00:31:15.662 iops : min= 96, max= 104, avg=96.40, stdev= 1.79, samples=20 00:31:15.662 lat (msec) : 50=100.00% 00:31:15.662 cpu : usr=96.13%, sys=3.34%, ctx=15, majf=0, minf=35 00:31:15.662 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:15.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.662 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.662 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:15.662 00:31:15.662 Run status group 0 (all jobs): 00:31:15.662 READ: bw=1140KiB/s (1167kB/s), 386KiB/s-754KiB/s (395kB/s-772kB/s), io=11.2MiB (11.7MB), run=10034-10037msec 00:31:15.978 11:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:15.978 11:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:15.978 11:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:15.978 11:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:15.978 11:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:15.978 11:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:15.978 11:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.978 11:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:15.978 11:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.978 11:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:15.978 11:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:31:15.978 11:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:15.978 11:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.978 11:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:15.978 11:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:15.978 11:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:15.978 11:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:15.978 11:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.978 11:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:15.979 11:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.979 11:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:15.979 11:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.979 11:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:15.979 11:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.979 00:31:15.979 real 0m11.752s 00:31:15.979 user 0m32.016s 00:31:15.979 sys 0m0.983s 00:31:15.979 11:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:15.979 11:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:15.979 ************************************ 00:31:15.979 END TEST fio_dif_1_multi_subsystems 00:31:15.979 ************************************ 00:31:15.979 11:46:50 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:15.979 11:46:50 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:15.979 11:46:50 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:15.979 11:46:50 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:15.979 11:46:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:15.979 ************************************ 00:31:15.979 START TEST fio_dif_rand_params 00:31:15.979 ************************************ 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@31 -- # create_subsystem 0 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:15.979 bdev_null0 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:15.979 [2024-07-15 11:46:50.396594] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:15.979 { 00:31:15.979 "params": { 00:31:15.979 "name": "Nvme$subsystem", 00:31:15.979 "trtype": "$TEST_TRANSPORT", 00:31:15.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:15.979 "adrfam": "ipv4", 00:31:15.979 "trsvcid": "$NVMF_PORT", 00:31:15.979 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:31:15.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:15.979 "hdgst": ${hdgst:-false}, 00:31:15.979 "ddgst": ${ddgst:-false} 00:31:15.979 }, 00:31:15.979 "method": "bdev_nvme_attach_controller" 00:31:15.979 } 00:31:15.979 EOF 00:31:15.979 )") 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:15.979 11:46:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:15.979 "params": { 00:31:15.979 "name": "Nvme0", 00:31:15.979 "trtype": "tcp", 00:31:15.979 "traddr": "10.0.0.2", 00:31:15.979 "adrfam": "ipv4", 00:31:15.979 "trsvcid": "4420", 00:31:15.979 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:15.979 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:15.979 "hdgst": false, 00:31:15.979 "ddgst": false 00:31:15.979 }, 00:31:15.979 "method": "bdev_nvme_attach_controller" 00:31:15.979 }' 00:31:16.291 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:16.291 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:16.291 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:16.291 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:16.291 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:16.291 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:16.291 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:16.291 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:16.291 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:16.291 11:46:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:16.555 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:16.555 ... 
00:31:16.555 fio-3.35 00:31:16.555 Starting 3 threads 00:31:16.555 EAL: No free 2048 kB hugepages reported on node 1 00:31:23.121 00:31:23.121 filename0: (groupid=0, jobs=1): err= 0: pid=2992123: Mon Jul 15 11:46:56 2024 00:31:23.121 read: IOPS=216, BW=27.1MiB/s (28.4MB/s)(136MiB/5006msec) 00:31:23.121 slat (nsec): min=10209, max=46588, avg=29664.44, stdev=3437.65 00:31:23.121 clat (usec): min=5451, max=51843, avg=13791.68, stdev=5573.58 00:31:23.121 lat (usec): min=5474, max=51875, avg=13821.35, stdev=5573.17 00:31:23.121 clat percentiles (usec): 00:31:23.121 | 1.00th=[ 7177], 5.00th=[ 8979], 10.00th=[10683], 20.00th=[11731], 00:31:23.121 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13304], 60.00th=[13829], 00:31:23.121 | 70.00th=[14091], 80.00th=[14615], 90.00th=[15270], 95.00th=[16319], 00:31:23.121 | 99.00th=[49021], 99.50th=[50594], 99.90th=[51643], 99.95th=[51643], 00:31:23.121 | 99.99th=[51643] 00:31:23.121 bw ( KiB/s): min=17920, max=32000, per=34.88%, avg=27724.80, stdev=3723.60, samples=10 00:31:23.121 iops : min= 140, max= 250, avg=216.60, stdev=29.09, samples=10 00:31:23.121 lat (msec) : 10=8.20%, 20=89.59%, 50=1.66%, 100=0.55% 00:31:23.121 cpu : usr=93.83%, sys=5.53%, ctx=8, majf=0, minf=56 00:31:23.121 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:23.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.121 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.121 issued rwts: total=1086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.121 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:23.121 filename0: (groupid=0, jobs=1): err= 0: pid=2992124: Mon Jul 15 11:46:56 2024 00:31:23.121 read: IOPS=190, BW=23.8MiB/s (25.0MB/s)(119MiB/5004msec) 00:31:23.121 slat (usec): min=9, max=100, avg=29.58, stdev= 4.32 00:31:23.121 clat (usec): min=4613, max=54673, avg=15696.71, stdev=7479.80 00:31:23.121 lat (usec): min=4635, max=54705, avg=15726.29, stdev=7479.00 00:31:23.121 clat percentiles (usec): 00:31:23.121 | 1.00th=[ 6783], 5.00th=[10814], 10.00th=[11600], 20.00th=[12649], 00:31:23.121 | 30.00th=[13566], 40.00th=[14091], 50.00th=[14615], 60.00th=[15139], 00:31:23.121 | 70.00th=[15533], 80.00th=[16188], 90.00th=[16909], 95.00th=[17957], 00:31:23.121 | 99.00th=[52691], 99.50th=[53216], 99.90th=[54789], 99.95th=[54789], 00:31:23.121 | 99.99th=[54789] 00:31:23.121 bw ( KiB/s): min=13824, max=28416, per=30.63%, avg=24345.60, stdev=4594.47, samples=10 00:31:23.121 iops : min= 108, max= 222, avg=190.20, stdev=35.89, samples=10 00:31:23.121 lat (msec) : 10=3.56%, 20=92.35%, 50=1.89%, 100=2.20% 00:31:23.121 cpu : usr=94.18%, sys=5.20%, ctx=9, majf=0, minf=119 00:31:23.121 IO depths : 1=2.7%, 2=97.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:23.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.121 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.121 issued rwts: total=954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.121 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:23.121 filename0: (groupid=0, jobs=1): err= 0: pid=2992125: Mon Jul 15 11:46:56 2024 00:31:23.121 read: IOPS=216, BW=27.1MiB/s (28.4MB/s)(137MiB/5044msec) 00:31:23.121 slat (nsec): min=10091, max=55140, avg=29196.58, stdev=3535.75 00:31:23.121 clat (usec): min=4597, max=55809, avg=13783.89, stdev=4951.36 00:31:23.121 lat (usec): min=4620, max=55836, avg=13813.09, stdev=4951.61 00:31:23.121 clat percentiles (usec): 00:31:23.121 | 
1.00th=[ 5538], 5.00th=[ 7898], 10.00th=[ 9634], 20.00th=[11731], 00:31:23.121 | 30.00th=[12649], 40.00th=[13173], 50.00th=[13829], 60.00th=[14353], 00:31:23.121 | 70.00th=[14877], 80.00th=[15533], 90.00th=[16188], 95.00th=[16712], 00:31:23.121 | 99.00th=[46924], 99.50th=[53740], 99.90th=[54789], 99.95th=[55837], 00:31:23.121 | 99.99th=[55837] 00:31:23.121 bw ( KiB/s): min=24576, max=33603, per=35.12%, avg=27910.70, stdev=2516.36, samples=10 00:31:23.121 iops : min= 192, max= 262, avg=218.00, stdev=19.53, samples=10 00:31:23.121 lat (msec) : 10=12.55%, 20=86.17%, 50=0.64%, 100=0.64% 00:31:23.121 cpu : usr=94.15%, sys=5.23%, ctx=13, majf=0, minf=126 00:31:23.121 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:23.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.121 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.121 issued rwts: total=1092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.121 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:23.121 00:31:23.121 Run status group 0 (all jobs): 00:31:23.121 READ: bw=77.6MiB/s (81.4MB/s), 23.8MiB/s-27.1MiB/s (25.0MB/s-28.4MB/s), io=392MiB (411MB), run=5004-5044msec 00:31:23.121 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:23.121 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:23.121 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:23.121 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:23.121 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:23.122 11:46:56 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.122 bdev_null0 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.122 [2024-07-15 11:46:56.854190] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.122 bdev_null1 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
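The second parameter set keeps the 64 MB / 512-byte-block null bdev layout but switches to --dif-type 2, so every logical block carries 16 bytes of metadata holding the DIF protection information that the target inserts or strips on behalf of the host (the transport was created with --dif-insert-or-strip). A quick sanity check of the geometry, assuming the size argument of bdev_null_create is MiB and the metadata travels interleaved with the data as an extended LBA:

# assumed: the "64" passed to bdev_null_create means 64 MiB
blocks=$(( 64 * 1024 * 1024 / 512 ))   # 131072 logical blocks per null bdev
xlba=$(( 512 + 16 ))                   # 528-byte extended LBA (data + metadata)
echo "$blocks blocks, $xlba bytes per block on the wire"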
00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.122 bdev_null2 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:23.122 { 00:31:23.122 "params": { 00:31:23.122 "name": "Nvme$subsystem", 00:31:23.122 "trtype": "$TEST_TRANSPORT", 00:31:23.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.122 "adrfam": "ipv4", 00:31:23.122 "trsvcid": "$NVMF_PORT", 00:31:23.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.122 "hdgst": ${hdgst:-false}, 00:31:23.122 "ddgst": ${ddgst:-false} 00:31:23.122 }, 00:31:23.122 "method": "bdev_nvme_attach_controller" 00:31:23.122 } 00:31:23.122 EOF 00:31:23.122 )") 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:23.122 { 00:31:23.122 "params": { 00:31:23.122 "name": "Nvme$subsystem", 00:31:23.122 "trtype": "$TEST_TRANSPORT", 00:31:23.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.122 "adrfam": "ipv4", 00:31:23.122 "trsvcid": "$NVMF_PORT", 00:31:23.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.122 "hdgst": ${hdgst:-false}, 00:31:23.122 "ddgst": ${ddgst:-false} 00:31:23.122 }, 00:31:23.122 "method": "bdev_nvme_attach_controller" 00:31:23.122 } 00:31:23.122 EOF 00:31:23.122 )") 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:23.122 11:46:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:23.122 { 00:31:23.122 "params": { 00:31:23.122 "name": "Nvme$subsystem", 00:31:23.123 "trtype": "$TEST_TRANSPORT", 00:31:23.123 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.123 "adrfam": "ipv4", 00:31:23.123 "trsvcid": "$NVMF_PORT", 00:31:23.123 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.123 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.123 "hdgst": ${hdgst:-false}, 00:31:23.123 "ddgst": ${ddgst:-false} 00:31:23.123 }, 00:31:23.123 "method": "bdev_nvme_attach_controller" 00:31:23.123 } 00:31:23.123 EOF 00:31:23.123 )") 00:31:23.123 11:46:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:23.123 11:46:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:23.123 11:46:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:23.123 11:46:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:23.123 "params": { 00:31:23.123 "name": "Nvme0", 00:31:23.123 "trtype": "tcp", 00:31:23.123 "traddr": "10.0.0.2", 00:31:23.123 "adrfam": "ipv4", 00:31:23.123 "trsvcid": "4420", 00:31:23.123 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:23.123 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:23.123 "hdgst": false, 00:31:23.123 "ddgst": false 00:31:23.123 }, 00:31:23.123 "method": "bdev_nvme_attach_controller" 00:31:23.123 },{ 00:31:23.123 "params": { 00:31:23.123 "name": "Nvme1", 00:31:23.123 "trtype": "tcp", 00:31:23.123 "traddr": "10.0.0.2", 00:31:23.123 "adrfam": "ipv4", 00:31:23.123 "trsvcid": "4420", 00:31:23.123 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:23.123 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:23.123 "hdgst": false, 00:31:23.123 "ddgst": false 00:31:23.123 }, 00:31:23.123 "method": "bdev_nvme_attach_controller" 00:31:23.123 },{ 00:31:23.123 "params": { 00:31:23.123 "name": "Nvme2", 00:31:23.123 "trtype": "tcp", 00:31:23.123 "traddr": "10.0.0.2", 00:31:23.123 "adrfam": "ipv4", 00:31:23.123 "trsvcid": "4420", 00:31:23.123 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:23.123 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:23.123 "hdgst": false, 00:31:23.123 "ddgst": false 00:31:23.123 }, 00:31:23.123 "method": "bdev_nvme_attach_controller" 00:31:23.123 }' 00:31:23.123 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:23.123 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:23.123 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:23.123 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:23.123 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:23.123 11:46:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:23.123 11:46:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:31:23.123 11:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:23.123 11:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:23.123 11:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:23.123 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:23.123 ... 00:31:23.123 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:23.123 ... 00:31:23.123 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:23.123 ... 00:31:23.123 fio-3.35 00:31:23.123 Starting 24 threads 00:31:23.123 EAL: No free 2048 kB hugepages reported on node 1 00:31:35.332 00:31:35.332 filename0: (groupid=0, jobs=1): err= 0: pid=2993548: Mon Jul 15 11:47:08 2024 00:31:35.332 read: IOPS=425, BW=1700KiB/s (1741kB/s)(16.6MiB/10012msec) 00:31:35.332 slat (usec): min=7, max=104, avg=46.59, stdev=18.31 00:31:35.332 clat (usec): min=13311, max=39395, avg=37210.89, stdev=2104.28 00:31:35.332 lat (usec): min=13319, max=39446, avg=37257.48, stdev=2107.71 00:31:35.332 clat percentiles (usec): 00:31:35.332 | 1.00th=[24511], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:31:35.332 | 30.00th=[36963], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:35.332 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38536], 00:31:35.332 | 99.00th=[38536], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:31:35.332 | 99.99th=[39584] 00:31:35.332 bw ( KiB/s): min= 1664, max= 1792, per=4.18%, avg=1696.00, stdev=56.87, samples=20 00:31:35.332 iops : min= 416, max= 448, avg=424.00, stdev=14.22, samples=20 00:31:35.332 lat (msec) : 20=0.75%, 50=99.25% 00:31:35.332 cpu : usr=98.40%, sys=1.19%, ctx=21, majf=0, minf=68 00:31:35.332 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:35.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.332 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.332 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:35.332 filename0: (groupid=0, jobs=1): err= 0: pid=2993549: Mon Jul 15 11:47:08 2024 00:31:35.332 read: IOPS=422, BW=1689KiB/s (1729kB/s)(16.5MiB/10005msec) 00:31:35.332 slat (nsec): min=9447, max=52382, avg=22261.61, stdev=8744.61 00:31:35.332 clat (usec): min=23412, max=51460, avg=37718.85, stdev=811.16 00:31:35.332 lat (usec): min=23423, max=51482, avg=37741.11, stdev=810.25 00:31:35.332 clat percentiles (usec): 00:31:35.332 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:35.332 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:35.332 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:31:35.332 | 99.00th=[39060], 99.50th=[39584], 99.90th=[46924], 99.95th=[46924], 00:31:35.332 | 99.99th=[51643] 00:31:35.332 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1684.21, stdev=47.95, samples=19 00:31:35.332 iops : min= 416, max= 448, avg=421.05, stdev=11.99, samples=19 00:31:35.332 lat (msec) : 50=99.95%, 100=0.05% 00:31:35.332 cpu : 
usr=98.51%, sys=1.07%, ctx=20, majf=0, minf=78 00:31:35.332 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:35.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.332 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.332 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:35.332 filename0: (groupid=0, jobs=1): err= 0: pid=2993550: Mon Jul 15 11:47:08 2024 00:31:35.332 read: IOPS=421, BW=1688KiB/s (1728kB/s)(16.5MiB/10012msec) 00:31:35.332 slat (nsec): min=7133, max=56498, avg=24241.87, stdev=7479.51 00:31:35.332 clat (usec): min=30410, max=58704, avg=37724.34, stdev=1389.34 00:31:35.332 lat (usec): min=30437, max=58730, avg=37748.58, stdev=1388.52 00:31:35.332 clat percentiles (usec): 00:31:35.332 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:35.332 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:35.332 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:31:35.332 | 99.00th=[38536], 99.50th=[39060], 99.90th=[58459], 99.95th=[58459], 00:31:35.332 | 99.99th=[58459] 00:31:35.332 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1683.20, stdev=62.64, samples=20 00:31:35.332 iops : min= 384, max= 448, avg=420.80, stdev=15.66, samples=20 00:31:35.332 lat (msec) : 50=99.62%, 100=0.38% 00:31:35.332 cpu : usr=98.57%, sys=1.02%, ctx=25, majf=0, minf=84 00:31:35.332 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:35.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.332 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.332 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:35.332 filename0: (groupid=0, jobs=1): err= 0: pid=2993551: Mon Jul 15 11:47:08 2024 00:31:35.332 read: IOPS=429, BW=1717KiB/s (1759kB/s)(16.8MiB/10006msec) 00:31:35.332 slat (nsec): min=5753, max=93331, avg=24717.29, stdev=9369.01 00:31:35.332 clat (usec): min=5696, max=85078, avg=37047.36, stdev=4159.59 00:31:35.332 lat (usec): min=5706, max=85095, avg=37072.07, stdev=4160.63 00:31:35.332 clat percentiles (usec): 00:31:35.332 | 1.00th=[23200], 5.00th=[31065], 10.00th=[37487], 20.00th=[37487], 00:31:35.332 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:35.332 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38011], 00:31:35.332 | 99.00th=[45876], 99.50th=[52691], 99.90th=[68682], 99.95th=[68682], 00:31:35.332 | 99.99th=[85459] 00:31:35.332 bw ( KiB/s): min= 1552, max= 1904, per=4.20%, avg=1707.79, stdev=86.96, samples=19 00:31:35.332 iops : min= 388, max= 476, avg=426.95, stdev=21.74, samples=19 00:31:35.332 lat (msec) : 10=0.37%, 50=98.79%, 100=0.84% 00:31:35.332 cpu : usr=98.80%, sys=0.80%, ctx=14, majf=0, minf=65 00:31:35.332 IO depths : 1=5.2%, 2=10.6%, 4=21.9%, 8=54.7%, 16=7.6%, 32=0.0%, >=64=0.0% 00:31:35.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.332 complete : 0=0.0%, 4=93.3%, 8=1.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.332 issued rwts: total=4296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:35.332 filename0: (groupid=0, jobs=1): err= 0: pid=2993552: Mon Jul 15 11:47:08 2024 00:31:35.332 read: IOPS=421, BW=1688KiB/s 
(1728kB/s)(16.5MiB/10010msec) 00:31:35.332 slat (nsec): min=6399, max=57134, avg=26389.86, stdev=7967.28 00:31:35.332 clat (usec): min=22940, max=62801, avg=37694.04, stdev=1813.40 00:31:35.332 lat (usec): min=22970, max=62819, avg=37720.43, stdev=1812.44 00:31:35.332 clat percentiles (usec): 00:31:35.332 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:35.332 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:35.332 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:31:35.332 | 99.00th=[39060], 99.50th=[39060], 99.90th=[62653], 99.95th=[62653], 00:31:35.332 | 99.99th=[62653] 00:31:35.332 bw ( KiB/s): min= 1539, max= 1792, per=4.14%, avg=1682.80, stdev=62.47, samples=20 00:31:35.332 iops : min= 384, max= 448, avg=420.65, stdev=15.72, samples=20 00:31:35.332 lat (msec) : 50=99.62%, 100=0.38% 00:31:35.332 cpu : usr=98.85%, sys=0.74%, ctx=14, majf=0, minf=49 00:31:35.332 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:35.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.332 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.332 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:35.332 filename0: (groupid=0, jobs=1): err= 0: pid=2993553: Mon Jul 15 11:47:08 2024 00:31:35.332 read: IOPS=422, BW=1688KiB/s (1729kB/s)(16.5MiB/10009msec) 00:31:35.332 slat (nsec): min=6559, max=53147, avg=26064.93, stdev=7236.98 00:31:35.332 clat (usec): min=22834, max=60973, avg=37675.65, stdev=1721.47 00:31:35.332 lat (usec): min=22861, max=60992, avg=37701.71, stdev=1720.59 00:31:35.332 clat percentiles (usec): 00:31:35.332 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:35.332 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:35.332 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:31:35.332 | 99.00th=[39060], 99.50th=[39060], 99.90th=[61080], 99.95th=[61080], 00:31:35.332 | 99.99th=[61080] 00:31:35.332 bw ( KiB/s): min= 1539, max= 1792, per=4.14%, avg=1683.15, stdev=62.34, samples=20 00:31:35.332 iops : min= 384, max= 448, avg=420.75, stdev=15.68, samples=20 00:31:35.332 lat (msec) : 50=99.62%, 100=0.38% 00:31:35.332 cpu : usr=98.85%, sys=0.73%, ctx=22, majf=0, minf=46 00:31:35.332 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:35.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.332 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.332 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:35.332 filename0: (groupid=0, jobs=1): err= 0: pid=2993554: Mon Jul 15 11:47:08 2024 00:31:35.332 read: IOPS=421, BW=1687KiB/s (1728kB/s)(16.5MiB/10013msec) 00:31:35.332 slat (nsec): min=10538, max=58203, avg=26845.10, stdev=7303.78 00:31:35.332 clat (usec): min=28124, max=74070, avg=37695.90, stdev=1566.86 00:31:35.332 lat (usec): min=28136, max=74084, avg=37722.75, stdev=1566.16 00:31:35.332 clat percentiles (usec): 00:31:35.332 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:35.332 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:35.332 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:31:35.332 | 99.00th=[38536], 99.50th=[39060], 
99.90th=[58983], 99.95th=[58983], 00:31:35.332 | 99.99th=[73925] 00:31:35.332 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1683.20, stdev=62.64, samples=20 00:31:35.332 iops : min= 384, max= 448, avg=420.80, stdev=15.66, samples=20 00:31:35.332 lat (msec) : 50=99.62%, 100=0.38% 00:31:35.332 cpu : usr=98.91%, sys=0.68%, ctx=9, majf=0, minf=50 00:31:35.332 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:35.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.333 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.333 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.333 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:35.333 filename0: (groupid=0, jobs=1): err= 0: pid=2993555: Mon Jul 15 11:47:08 2024 00:31:35.333 read: IOPS=421, BW=1686KiB/s (1726kB/s)(16.5MiB/10023msec) 00:31:35.333 slat (usec): min=9, max=102, avg=44.28, stdev=17.65 00:31:35.333 clat (usec): min=29159, max=68770, avg=37541.32, stdev=2031.87 00:31:35.333 lat (usec): min=29180, max=68784, avg=37585.59, stdev=2031.22 00:31:35.333 clat percentiles (usec): 00:31:35.333 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:31:35.333 | 30.00th=[36963], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:35.333 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38536], 00:31:35.333 | 99.00th=[39060], 99.50th=[39060], 99.90th=[68682], 99.95th=[68682], 00:31:35.333 | 99.99th=[68682] 00:31:35.333 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1683.20, stdev=62.64, samples=20 00:31:35.333 iops : min= 384, max= 448, avg=420.80, stdev=15.66, samples=20 00:31:35.333 lat (msec) : 50=99.62%, 100=0.38% 00:31:35.333 cpu : usr=98.80%, sys=0.79%, ctx=10, majf=0, minf=45 00:31:35.333 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:35.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.333 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.333 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.333 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:35.333 filename1: (groupid=0, jobs=1): err= 0: pid=2993556: Mon Jul 15 11:47:08 2024 00:31:35.333 read: IOPS=425, BW=1701KiB/s (1741kB/s)(16.6MiB/10011msec) 00:31:35.333 slat (usec): min=6, max=103, avg=46.95, stdev=18.38 00:31:35.333 clat (usec): min=12119, max=47900, avg=37196.42, stdev=2169.63 00:31:35.333 lat (usec): min=12126, max=47926, avg=37243.36, stdev=2173.47 00:31:35.333 clat percentiles (usec): 00:31:35.333 | 1.00th=[24511], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:31:35.333 | 30.00th=[36963], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:35.333 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38536], 00:31:35.333 | 99.00th=[38536], 99.50th=[39060], 99.90th=[39584], 99.95th=[39584], 00:31:35.333 | 99.99th=[47973] 00:31:35.333 bw ( KiB/s): min= 1664, max= 1795, per=4.18%, avg=1696.15, stdev=57.14, samples=20 00:31:35.333 iops : min= 416, max= 448, avg=424.00, stdev=14.22, samples=20 00:31:35.333 lat (msec) : 20=0.70%, 50=99.30% 00:31:35.333 cpu : usr=98.62%, sys=0.96%, ctx=26, majf=0, minf=64 00:31:35.333 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:35.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.333 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.333 
issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.333 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:35.333 filename1: (groupid=0, jobs=1): err= 0: pid=2993557: Mon Jul 15 11:47:08 2024 00:31:35.333 read: IOPS=422, BW=1689KiB/s (1729kB/s)(16.5MiB/10006msec) 00:31:35.333 slat (nsec): min=5026, max=53753, avg=25771.57, stdev=8035.82 00:31:35.333 clat (usec): min=22791, max=58853, avg=37657.93, stdev=1617.22 00:31:35.333 lat (usec): min=22802, max=58867, avg=37683.70, stdev=1616.52 00:31:35.333 clat percentiles (usec): 00:31:35.333 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:35.333 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:35.333 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38011], 00:31:35.333 | 99.00th=[39060], 99.50th=[39060], 99.90th=[58983], 99.95th=[58983], 00:31:35.333 | 99.99th=[58983] 00:31:35.333 bw ( KiB/s): min= 1532, max= 1792, per=4.15%, avg=1684.00, stdev=64.70, samples=19 00:31:35.333 iops : min= 383, max= 448, avg=421.00, stdev=16.18, samples=19 00:31:35.333 lat (msec) : 50=99.62%, 100=0.38% 00:31:35.333 cpu : usr=98.76%, sys=0.83%, ctx=14, majf=0, minf=49 00:31:35.333 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:35.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.333 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.333 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.333 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:35.333 filename1: (groupid=0, jobs=1): err= 0: pid=2993558: Mon Jul 15 11:47:08 2024 00:31:35.333 read: IOPS=437, BW=1749KiB/s (1791kB/s)(17.1MiB/10006msec) 00:31:35.333 slat (nsec): min=6141, max=90224, avg=16342.80, stdev=8226.02 00:31:35.333 clat (usec): min=8089, max=84277, avg=36517.62, stdev=5203.83 00:31:35.333 lat (usec): min=8099, max=84299, avg=36533.97, stdev=5203.38 00:31:35.333 clat percentiles (usec): 00:31:35.333 | 1.00th=[18744], 5.00th=[26608], 10.00th=[30802], 20.00th=[32637], 00:31:35.333 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:35.333 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38536], 95.00th=[43779], 00:31:35.333 | 99.00th=[45351], 99.50th=[52691], 99.90th=[68682], 99.95th=[68682], 00:31:35.333 | 99.99th=[84411] 00:31:35.333 bw ( KiB/s): min= 1552, max= 1840, per=4.29%, avg=1741.47, stdev=67.09, samples=19 00:31:35.333 iops : min= 388, max= 460, avg=435.37, stdev=16.77, samples=19 00:31:35.333 lat (msec) : 10=0.14%, 20=2.42%, 50=96.80%, 100=0.64% 00:31:35.333 cpu : usr=98.65%, sys=0.92%, ctx=12, majf=0, minf=61 00:31:35.333 IO depths : 1=0.1%, 2=0.1%, 4=1.6%, 8=80.9%, 16=17.3%, 32=0.0%, >=64=0.0% 00:31:35.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.333 complete : 0=0.0%, 4=89.3%, 8=9.6%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.333 issued rwts: total=4376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.333 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:35.333 filename1: (groupid=0, jobs=1): err= 0: pid=2993559: Mon Jul 15 11:47:08 2024 00:31:35.333 read: IOPS=421, BW=1688KiB/s (1728kB/s)(16.5MiB/10012msec) 00:31:35.333 slat (nsec): min=9811, max=54390, avg=20795.50, stdev=7516.61 00:31:35.333 clat (usec): min=27966, max=58779, avg=37758.01, stdev=1421.26 00:31:35.333 lat (usec): min=27979, max=58797, avg=37778.80, stdev=1420.61 00:31:35.333 clat percentiles (usec): 00:31:35.333 | 
1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:35.333 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:35.333 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:31:35.333 | 99.00th=[38536], 99.50th=[39060], 99.90th=[58459], 99.95th=[58983], 00:31:35.333 | 99.99th=[58983] 00:31:35.333 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1683.20, stdev=62.64, samples=20 00:31:35.333 iops : min= 384, max= 448, avg=420.80, stdev=15.66, samples=20 00:31:35.333 lat (msec) : 50=99.62%, 100=0.38% 00:31:35.333 cpu : usr=98.27%, sys=1.31%, ctx=17, majf=0, minf=75 00:31:35.333 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:35.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.333 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.333 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.333 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:35.333 filename1: (groupid=0, jobs=1): err= 0: pid=2993560: Mon Jul 15 11:47:08 2024 00:31:35.333 read: IOPS=421, BW=1688KiB/s (1728kB/s)(16.5MiB/10012msec) 00:31:35.333 slat (nsec): min=9500, max=56775, avg=27062.07, stdev=7657.89 00:31:35.333 clat (usec): min=30362, max=58873, avg=37695.00, stdev=1401.26 00:31:35.333 lat (usec): min=30379, max=58888, avg=37722.07, stdev=1400.38 00:31:35.333 clat percentiles (usec): 00:31:35.333 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:35.333 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:35.333 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38011], 00:31:35.333 | 99.00th=[38536], 99.50th=[39060], 99.90th=[58983], 99.95th=[58983], 00:31:35.333 | 99.99th=[58983] 00:31:35.333 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1683.20, stdev=62.64, samples=20 00:31:35.333 iops : min= 384, max= 448, avg=420.80, stdev=15.66, samples=20 00:31:35.333 lat (msec) : 50=99.62%, 100=0.38% 00:31:35.333 cpu : usr=98.70%, sys=0.89%, ctx=10, majf=0, minf=73 00:31:35.333 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:35.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.333 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.333 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.333 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:35.333 filename1: (groupid=0, jobs=1): err= 0: pid=2993561: Mon Jul 15 11:47:08 2024 00:31:35.333 read: IOPS=422, BW=1689KiB/s (1729kB/s)(16.5MiB/10006msec) 00:31:35.333 slat (usec): min=9, max=108, avg=41.66, stdev=17.33 00:31:35.333 clat (usec): min=10152, max=90204, avg=37489.01, stdev=3195.66 00:31:35.333 lat (usec): min=10163, max=90222, avg=37530.67, stdev=3196.02 00:31:35.333 clat percentiles (usec): 00:31:35.333 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:31:35.333 | 30.00th=[36963], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:35.333 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38536], 00:31:35.333 | 99.00th=[39060], 99.50th=[39060], 99.90th=[79168], 99.95th=[79168], 00:31:35.333 | 99.99th=[90702] 00:31:35.333 bw ( KiB/s): min= 1539, max= 1792, per=4.13%, avg=1677.63, stdev=58.33, samples=19 00:31:35.333 iops : min= 384, max= 448, avg=419.37, stdev=14.68, samples=19 00:31:35.333 lat (msec) : 20=0.38%, 50=99.24%, 100=0.38% 00:31:35.333 cpu : usr=98.94%, 
sys=0.64%, ctx=14, majf=0, minf=70 00:31:35.333 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:35.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.333 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.333 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.333 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:35.333 filename1: (groupid=0, jobs=1): err= 0: pid=2993562: Mon Jul 15 11:47:08 2024 00:31:35.333 read: IOPS=421, BW=1688KiB/s (1728kB/s)(16.5MiB/10012msec) 00:31:35.333 slat (nsec): min=6160, max=55257, avg=26830.11, stdev=7855.71 00:31:35.333 clat (usec): min=22789, max=64732, avg=37684.82, stdev=1919.69 00:31:35.333 lat (usec): min=22807, max=64761, avg=37711.65, stdev=1918.74 00:31:35.333 clat percentiles (usec): 00:31:35.333 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:35.333 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:35.333 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:31:35.333 | 99.00th=[39060], 99.50th=[39060], 99.90th=[64750], 99.95th=[64750], 00:31:35.333 | 99.99th=[64750] 00:31:35.333 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1682.65, stdev=62.84, samples=20 00:31:35.333 iops : min= 384, max= 448, avg=420.65, stdev=15.72, samples=20 00:31:35.333 lat (msec) : 50=99.62%, 100=0.38% 00:31:35.333 cpu : usr=98.71%, sys=0.88%, ctx=11, majf=0, minf=68 00:31:35.333 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:35.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.333 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.333 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.334 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:35.334 filename1: (groupid=0, jobs=1): err= 0: pid=2993563: Mon Jul 15 11:47:08 2024 00:31:35.334 read: IOPS=421, BW=1687KiB/s (1728kB/s)(16.5MiB/10003msec) 00:31:35.334 slat (nsec): min=9384, max=93283, avg=26762.32, stdev=8945.41 00:31:35.334 clat (usec): min=18299, max=99584, avg=37687.23, stdev=3220.87 00:31:35.334 lat (usec): min=18313, max=99602, avg=37713.99, stdev=3220.34 00:31:35.334 clat percentiles (usec): 00:31:35.334 | 1.00th=[30802], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:35.334 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:35.334 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38011], 00:31:35.334 | 99.00th=[39060], 99.50th=[56361], 99.90th=[79168], 99.95th=[79168], 00:31:35.334 | 99.99th=[99091] 00:31:35.334 bw ( KiB/s): min= 1507, max= 1792, per=4.14%, avg=1682.68, stdev=68.13, samples=19 00:31:35.334 iops : min= 376, max= 448, avg=420.63, stdev=17.14, samples=19 00:31:35.334 lat (msec) : 20=0.38%, 50=99.05%, 100=0.57% 00:31:35.334 cpu : usr=98.92%, sys=0.66%, ctx=11, majf=0, minf=58 00:31:35.334 IO depths : 1=6.0%, 2=12.2%, 4=24.5%, 8=50.8%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:35.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.334 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.334 issued rwts: total=4220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.334 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:35.334 filename2: (groupid=0, jobs=1): err= 0: pid=2993564: Mon Jul 15 11:47:08 2024 00:31:35.334 read: IOPS=424, BW=1700KiB/s 
(1740kB/s)(16.6MiB/10017msec) 00:31:35.334 slat (usec): min=5, max=126, avg=48.76, stdev=19.72 00:31:35.334 clat (usec): min=14444, max=39402, avg=37228.08, stdev=1967.29 00:31:35.334 lat (usec): min=14456, max=39455, avg=37276.84, stdev=1970.80 00:31:35.334 clat percentiles (usec): 00:31:35.334 | 1.00th=[29230], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:31:35.334 | 30.00th=[36963], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:35.334 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38536], 00:31:35.334 | 99.00th=[39060], 99.50th=[39060], 99.90th=[39060], 99.95th=[39584], 00:31:35.334 | 99.99th=[39584] 00:31:35.334 bw ( KiB/s): min= 1664, max= 1792, per=4.18%, avg=1696.00, stdev=56.87, samples=20 00:31:35.334 iops : min= 416, max= 448, avg=424.00, stdev=14.22, samples=20 00:31:35.334 lat (msec) : 20=0.75%, 50=99.25% 00:31:35.334 cpu : usr=98.76%, sys=0.79%, ctx=12, majf=0, minf=65 00:31:35.334 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:35.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.334 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.334 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.334 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:35.334 filename2: (groupid=0, jobs=1): err= 0: pid=2993565: Mon Jul 15 11:47:08 2024 00:31:35.334 read: IOPS=421, BW=1686KiB/s (1726kB/s)(16.5MiB/10023msec) 00:31:35.334 slat (usec): min=11, max=156, avg=49.99, stdev=20.84 00:31:35.334 clat (usec): min=26483, max=68779, avg=37457.18, stdev=2063.38 00:31:35.334 lat (usec): min=26510, max=68822, avg=37507.18, stdev=2063.46 00:31:35.334 clat percentiles (usec): 00:31:35.334 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:31:35.334 | 30.00th=[36963], 40.00th=[36963], 50.00th=[37487], 60.00th=[37487], 00:31:35.334 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38536], 00:31:35.334 | 99.00th=[39060], 99.50th=[39060], 99.90th=[68682], 99.95th=[68682], 00:31:35.334 | 99.99th=[68682] 00:31:35.334 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1683.20, stdev=62.64, samples=20 00:31:35.334 iops : min= 384, max= 448, avg=420.80, stdev=15.66, samples=20 00:31:35.334 lat (msec) : 50=99.62%, 100=0.38% 00:31:35.334 cpu : usr=98.08%, sys=1.24%, ctx=9, majf=0, minf=64 00:31:35.334 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:35.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.334 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.334 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.334 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:35.334 filename2: (groupid=0, jobs=1): err= 0: pid=2993566: Mon Jul 15 11:47:08 2024 00:31:35.334 read: IOPS=422, BW=1688KiB/s (1729kB/s)(16.5MiB/10007msec) 00:31:35.334 slat (nsec): min=10027, max=74522, avg=30106.11, stdev=8912.14 00:31:35.334 clat (usec): min=18112, max=72518, avg=37604.22, stdev=2507.46 00:31:35.334 lat (usec): min=18133, max=72543, avg=37634.32, stdev=2507.04 00:31:35.334 clat percentiles (usec): 00:31:35.334 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:35.334 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:35.334 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38011], 00:31:35.334 | 99.00th=[38536], 99.50th=[39060], 99.90th=[72877], 
99.95th=[72877], 00:31:35.334 | 99.99th=[72877] 00:31:35.334 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1684.21, stdev=64.19, samples=19 00:31:35.334 iops : min= 384, max= 448, avg=421.05, stdev=16.05, samples=19 00:31:35.334 lat (msec) : 20=0.38%, 50=99.24%, 100=0.38% 00:31:35.334 cpu : usr=98.52%, sys=0.97%, ctx=10, majf=0, minf=61 00:31:35.334 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:35.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.334 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.334 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.334 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:35.334 filename2: (groupid=0, jobs=1): err= 0: pid=2993567: Mon Jul 15 11:47:08 2024 00:31:35.334 read: IOPS=424, BW=1700KiB/s (1740kB/s)(16.6MiB/10017msec) 00:31:35.334 slat (nsec): min=9660, max=68867, avg=31891.93, stdev=12543.93 00:31:35.334 clat (usec): min=10142, max=39480, avg=37400.80, stdev=2072.84 00:31:35.334 lat (usec): min=10156, max=39508, avg=37432.69, stdev=2073.72 00:31:35.334 clat percentiles (usec): 00:31:35.334 | 1.00th=[29754], 5.00th=[36963], 10.00th=[36963], 20.00th=[37487], 00:31:35.334 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:35.334 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:31:35.334 | 99.00th=[39060], 99.50th=[39060], 99.90th=[39584], 99.95th=[39584], 00:31:35.334 | 99.99th=[39584] 00:31:35.334 bw ( KiB/s): min= 1664, max= 1792, per=4.18%, avg=1696.00, stdev=56.87, samples=20 00:31:35.334 iops : min= 416, max= 448, avg=424.00, stdev=14.22, samples=20 00:31:35.334 lat (msec) : 20=0.70%, 50=99.30% 00:31:35.334 cpu : usr=98.85%, sys=0.78%, ctx=13, majf=0, minf=75 00:31:35.334 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:35.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.334 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.334 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.334 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:35.334 filename2: (groupid=0, jobs=1): err= 0: pid=2993568: Mon Jul 15 11:47:08 2024 00:31:35.334 read: IOPS=425, BW=1701KiB/s (1742kB/s)(16.6MiB/10010msec) 00:31:35.334 slat (usec): min=9, max=157, avg=49.92, stdev=21.03 00:31:35.334 clat (usec): min=10837, max=39436, avg=37124.77, stdev=2205.09 00:31:35.334 lat (usec): min=10846, max=39483, avg=37174.69, stdev=2209.59 00:31:35.334 clat percentiles (usec): 00:31:35.334 | 1.00th=[24511], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:31:35.334 | 30.00th=[36963], 40.00th=[36963], 50.00th=[37487], 60.00th=[37487], 00:31:35.334 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38536], 00:31:35.334 | 99.00th=[38536], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:31:35.334 | 99.99th=[39584] 00:31:35.334 bw ( KiB/s): min= 1664, max= 1792, per=4.18%, avg=1696.00, stdev=56.87, samples=20 00:31:35.334 iops : min= 416, max= 448, avg=424.00, stdev=14.22, samples=20 00:31:35.334 lat (msec) : 20=0.75%, 50=99.25% 00:31:35.334 cpu : usr=98.20%, sys=1.28%, ctx=14, majf=0, minf=40 00:31:35.334 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:35.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.334 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:31:35.334 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.334 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:35.334 filename2: (groupid=0, jobs=1): err= 0: pid=2993569: Mon Jul 15 11:47:08 2024 00:31:35.334 read: IOPS=421, BW=1688KiB/s (1728kB/s)(16.5MiB/10010msec) 00:31:35.334 slat (nsec): min=6108, max=53903, avg=26468.18, stdev=8029.95 00:31:35.334 clat (usec): min=22520, max=78200, avg=37691.76, stdev=1962.57 00:31:35.334 lat (usec): min=22536, max=78217, avg=37718.22, stdev=1961.64 00:31:35.334 clat percentiles (usec): 00:31:35.334 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:35.334 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:35.334 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:31:35.334 | 99.00th=[39060], 99.50th=[39060], 99.90th=[62653], 99.95th=[62653], 00:31:35.334 | 99.99th=[78119] 00:31:35.334 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1683.00, stdev=62.71, samples=20 00:31:35.334 iops : min= 384, max= 448, avg=420.75, stdev=15.68, samples=20 00:31:35.334 lat (msec) : 50=99.62%, 100=0.38% 00:31:35.334 cpu : usr=98.75%, sys=0.84%, ctx=14, majf=0, minf=49 00:31:35.334 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:35.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.334 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.334 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.334 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:35.334 filename2: (groupid=0, jobs=1): err= 0: pid=2993570: Mon Jul 15 11:47:08 2024 00:31:35.334 read: IOPS=421, BW=1686KiB/s (1726kB/s)(16.5MiB/10023msec) 00:31:35.334 slat (usec): min=9, max=102, avg=44.30, stdev=17.84 00:31:35.334 clat (usec): min=29155, max=68788, avg=37536.13, stdev=2031.58 00:31:35.334 lat (usec): min=29178, max=68803, avg=37580.43, stdev=2031.06 00:31:35.334 clat percentiles (usec): 00:31:35.334 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:31:35.334 | 30.00th=[36963], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:35.334 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38536], 00:31:35.334 | 99.00th=[39060], 99.50th=[39060], 99.90th=[68682], 99.95th=[68682], 00:31:35.334 | 99.99th=[68682] 00:31:35.334 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1683.20, stdev=62.64, samples=20 00:31:35.334 iops : min= 384, max= 448, avg=420.80, stdev=15.66, samples=20 00:31:35.334 lat (msec) : 50=99.62%, 100=0.38% 00:31:35.334 cpu : usr=98.63%, sys=0.96%, ctx=15, majf=0, minf=70 00:31:35.334 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:35.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.334 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.334 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.334 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:35.334 filename2: (groupid=0, jobs=1): err= 0: pid=2993571: Mon Jul 15 11:47:08 2024 00:31:35.334 read: IOPS=421, BW=1687KiB/s (1728kB/s)(16.5MiB/10013msec) 00:31:35.335 slat (nsec): min=9080, max=54511, avg=26569.18, stdev=7711.79 00:31:35.335 clat (usec): min=30526, max=58792, avg=37702.37, stdev=1393.67 00:31:35.335 lat (usec): min=30554, max=58809, avg=37728.94, stdev=1392.80 00:31:35.335 clat percentiles (usec): 00:31:35.335 | 1.00th=[36963], 
5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:35.335 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:31:35.335 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:31:35.335 | 99.00th=[38536], 99.50th=[39060], 99.90th=[58983], 99.95th=[58983], 00:31:35.335 | 99.99th=[58983] 00:31:35.335 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1683.20, stdev=62.64, samples=20 00:31:35.335 iops : min= 384, max= 448, avg=420.80, stdev=15.66, samples=20 00:31:35.335 lat (msec) : 50=99.62%, 100=0.38% 00:31:35.335 cpu : usr=98.90%, sys=0.67%, ctx=13, majf=0, minf=63 00:31:35.335 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:35.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.335 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.335 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.335 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:35.335 00:31:35.335 Run status group 0 (all jobs): 00:31:35.335 READ: bw=39.7MiB/s (41.6MB/s), 1686KiB/s-1749KiB/s (1726kB/s-1791kB/s), io=397MiB (417MB), run=10003-10023msec 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:35.335 
11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:35.335 bdev_null0 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:35.335 [2024-07-15 11:47:08.806573] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:35.335 bdev_null1 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:35.335 11:47:08 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:35.335 { 00:31:35.335 "params": { 00:31:35.335 "name": "Nvme$subsystem", 00:31:35.335 "trtype": "$TEST_TRANSPORT", 00:31:35.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.335 "adrfam": "ipv4", 00:31:35.335 "trsvcid": "$NVMF_PORT", 00:31:35.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.335 "hdgst": ${hdgst:-false}, 00:31:35.335 "ddgst": ${ddgst:-false} 00:31:35.335 }, 00:31:35.335 "method": "bdev_nvme_attach_controller" 00:31:35.335 } 00:31:35.335 EOF 00:31:35.335 )") 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:35.335 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:35.336 11:47:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:35.336 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:35.336 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:35.336 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:35.336 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:35.336 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:35.336 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:35.336 11:47:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:35.336 11:47:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:35.336 { 00:31:35.336 "params": { 00:31:35.336 "name": "Nvme$subsystem", 00:31:35.336 "trtype": "$TEST_TRANSPORT", 00:31:35.336 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.336 "adrfam": "ipv4", 00:31:35.336 "trsvcid": "$NVMF_PORT", 00:31:35.336 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.336 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.336 "hdgst": ${hdgst:-false}, 00:31:35.336 "ddgst": ${ddgst:-false} 00:31:35.336 }, 00:31:35.336 "method": "bdev_nvme_attach_controller" 00:31:35.336 } 00:31:35.336 EOF 00:31:35.336 )") 00:31:35.336 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 
00:31:35.336 11:47:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:35.336 11:47:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:35.336 11:47:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:35.336 11:47:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:35.336 11:47:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:35.336 "params": { 00:31:35.336 "name": "Nvme0", 00:31:35.336 "trtype": "tcp", 00:31:35.336 "traddr": "10.0.0.2", 00:31:35.336 "adrfam": "ipv4", 00:31:35.336 "trsvcid": "4420", 00:31:35.336 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:35.336 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:35.336 "hdgst": false, 00:31:35.336 "ddgst": false 00:31:35.336 }, 00:31:35.336 "method": "bdev_nvme_attach_controller" 00:31:35.336 },{ 00:31:35.336 "params": { 00:31:35.336 "name": "Nvme1", 00:31:35.336 "trtype": "tcp", 00:31:35.336 "traddr": "10.0.0.2", 00:31:35.336 "adrfam": "ipv4", 00:31:35.336 "trsvcid": "4420", 00:31:35.336 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:35.336 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:35.336 "hdgst": false, 00:31:35.336 "ddgst": false 00:31:35.336 }, 00:31:35.336 "method": "bdev_nvme_attach_controller" 00:31:35.336 }' 00:31:35.336 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:35.336 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:35.336 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:35.336 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:35.336 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:35.336 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:35.336 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:35.336 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:35.336 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:35.336 11:47:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:35.336 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:35.336 ... 00:31:35.336 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:35.336 ... 
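Those two fragments (Nvme0 and Nvme1, printed just above) and the generated job file are then handed to fio through /dev/fd process substitutions, with the spdk_bdev plugin preloaded; the earlier ldd | grep libasan / libclang_rt.asan probing exists so that a sanitizer runtime, if the plugin links one, can be preloaded ahead of it (it is empty in this run). A hedged sketch of the same launch using ordinary files instead of /dev/fd; the plugin path and file names are placeholders:

# Sketch of the fio_plugin launch from the trace; paths are placeholders.
plugin=/path/to/spdk/build/fio/spdk_bdev   # external ioengine built by SPDK
fio_bin=/usr/src/fio/fio

# Pick up a sanitizer runtime if the plugin was built with one, mirroring the
# ldd probing in the trace.
asan_lib=$(ldd "$plugin" | awk '/libasan|libclang_rt\.asan/ {print $3; exit}')

# bdev.json: the attach-controller config sketched above.
# job.fio:   the job file emitted by gen_fio_conf (filename0/filename1 sections).
LD_PRELOAD="$asan_lib $plugin" "$fio_bin" \
    --ioengine=spdk_bdev \
    --spdk_json_conf bdev.json \
    job.fio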
00:31:35.336 fio-3.35 00:31:35.336 Starting 4 threads 00:31:35.336 EAL: No free 2048 kB hugepages reported on node 1 00:31:41.908 00:31:41.908 filename0: (groupid=0, jobs=1): err= 0: pid=2995619: Mon Jul 15 11:47:15 2024 00:31:41.909 read: IOPS=1857, BW=14.5MiB/s (15.2MB/s)(72.6MiB/5002msec) 00:31:41.909 slat (nsec): min=9249, max=42016, avg=12944.38, stdev=3976.66 00:31:41.909 clat (usec): min=968, max=7820, avg=4268.51, stdev=761.97 00:31:41.909 lat (usec): min=984, max=7830, avg=4281.45, stdev=761.93 00:31:41.909 clat percentiles (usec): 00:31:41.909 | 1.00th=[ 2638], 5.00th=[ 3163], 10.00th=[ 3425], 20.00th=[ 3720], 00:31:41.909 | 30.00th=[ 3916], 40.00th=[ 4113], 50.00th=[ 4293], 60.00th=[ 4424], 00:31:41.909 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 5014], 95.00th=[ 5800], 00:31:41.909 | 99.00th=[ 6849], 99.50th=[ 7111], 99.90th=[ 7570], 99.95th=[ 7767], 00:31:41.909 | 99.99th=[ 7832] 00:31:41.909 bw ( KiB/s): min=13712, max=15840, per=26.31%, avg=14852.80, stdev=643.55, samples=10 00:31:41.909 iops : min= 1714, max= 1980, avg=1856.60, stdev=80.44, samples=10 00:31:41.909 lat (usec) : 1000=0.01% 00:31:41.909 lat (msec) : 2=0.24%, 4=33.18%, 10=66.57% 00:31:41.909 cpu : usr=96.42%, sys=3.20%, ctx=9, majf=0, minf=9 00:31:41.909 IO depths : 1=0.3%, 2=6.5%, 4=65.6%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:41.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.909 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.909 issued rwts: total=9291,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.909 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:41.909 filename0: (groupid=0, jobs=1): err= 0: pid=2995620: Mon Jul 15 11:47:15 2024 00:31:41.909 read: IOPS=1725, BW=13.5MiB/s (14.1MB/s)(67.4MiB/5001msec) 00:31:41.909 slat (nsec): min=7590, max=52070, avg=13589.49, stdev=4446.91 00:31:41.909 clat (usec): min=890, max=8195, avg=4592.61, stdev=793.95 00:31:41.909 lat (usec): min=906, max=8211, avg=4606.20, stdev=793.48 00:31:41.909 clat percentiles (usec): 00:31:41.909 | 1.00th=[ 3097], 5.00th=[ 3621], 10.00th=[ 3851], 20.00th=[ 4113], 00:31:41.909 | 30.00th=[ 4228], 40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4555], 00:31:41.909 | 70.00th=[ 4621], 80.00th=[ 4883], 90.00th=[ 5604], 95.00th=[ 6390], 00:31:41.909 | 99.00th=[ 7242], 99.50th=[ 7570], 99.90th=[ 7898], 99.95th=[ 8094], 00:31:41.909 | 99.99th=[ 8225] 00:31:41.909 bw ( KiB/s): min=13264, max=14384, per=24.48%, avg=13816.89, stdev=325.17, samples=9 00:31:41.909 iops : min= 1658, max= 1798, avg=1727.11, stdev=40.65, samples=9 00:31:41.909 lat (usec) : 1000=0.05% 00:31:41.909 lat (msec) : 2=0.12%, 4=13.94%, 10=85.90% 00:31:41.909 cpu : usr=96.66%, sys=2.94%, ctx=14, majf=0, minf=9 00:31:41.909 IO depths : 1=0.2%, 2=7.3%, 4=65.1%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:41.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.909 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.909 issued rwts: total=8631,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.909 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:41.909 filename1: (groupid=0, jobs=1): err= 0: pid=2995621: Mon Jul 15 11:47:15 2024 00:31:41.909 read: IOPS=1788, BW=14.0MiB/s (14.6MB/s)(69.9MiB/5003msec) 00:31:41.909 slat (nsec): min=7609, max=42037, avg=13517.21, stdev=4250.24 00:31:41.909 clat (usec): min=963, max=8372, avg=4432.77, stdev=806.73 00:31:41.909 lat (usec): min=973, max=8390, avg=4446.28, stdev=806.72 00:31:41.909 clat 
percentiles (usec): 00:31:41.909 | 1.00th=[ 2802], 5.00th=[ 3294], 10.00th=[ 3556], 20.00th=[ 3884], 00:31:41.909 | 30.00th=[ 4080], 40.00th=[ 4293], 50.00th=[ 4424], 60.00th=[ 4490], 00:31:41.909 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 5407], 95.00th=[ 6128], 00:31:41.909 | 99.00th=[ 7177], 99.50th=[ 7439], 99.90th=[ 7898], 99.95th=[ 8094], 00:31:41.909 | 99.99th=[ 8356] 00:31:41.909 bw ( KiB/s): min=13328, max=15744, per=25.34%, avg=14302.40, stdev=827.73, samples=10 00:31:41.909 iops : min= 1666, max= 1968, avg=1787.80, stdev=103.47, samples=10 00:31:41.909 lat (usec) : 1000=0.01% 00:31:41.909 lat (msec) : 2=0.11%, 4=24.68%, 10=75.20% 00:31:41.909 cpu : usr=96.20%, sys=3.40%, ctx=9, majf=0, minf=10 00:31:41.909 IO depths : 1=0.1%, 2=7.0%, 4=64.7%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:41.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.909 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.909 issued rwts: total=8947,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.909 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:41.909 filename1: (groupid=0, jobs=1): err= 0: pid=2995622: Mon Jul 15 11:47:15 2024 00:31:41.909 read: IOPS=1685, BW=13.2MiB/s (13.8MB/s)(65.9MiB/5002msec) 00:31:41.909 slat (nsec): min=8516, max=42074, avg=13151.73, stdev=4373.26 00:31:41.909 clat (usec): min=702, max=8574, avg=4704.91, stdev=814.70 00:31:41.909 lat (usec): min=717, max=8583, avg=4718.06, stdev=814.14 00:31:41.909 clat percentiles (usec): 00:31:41.909 | 1.00th=[ 3294], 5.00th=[ 3785], 10.00th=[ 4047], 20.00th=[ 4178], 00:31:41.909 | 30.00th=[ 4359], 40.00th=[ 4490], 50.00th=[ 4555], 60.00th=[ 4621], 00:31:41.909 | 70.00th=[ 4752], 80.00th=[ 5014], 90.00th=[ 5866], 95.00th=[ 6587], 00:31:41.909 | 99.00th=[ 7504], 99.50th=[ 7701], 99.90th=[ 8225], 99.95th=[ 8356], 00:31:41.909 | 99.99th=[ 8586] 00:31:41.909 bw ( KiB/s): min=13008, max=14176, per=23.89%, avg=13484.20, stdev=334.87, samples=10 00:31:41.909 iops : min= 1626, max= 1772, avg=1685.50, stdev=41.86, samples=10 00:31:41.909 lat (usec) : 750=0.01%, 1000=0.05% 00:31:41.909 lat (msec) : 2=0.17%, 4=7.98%, 10=91.79% 00:31:41.909 cpu : usr=96.62%, sys=2.92%, ctx=6, majf=0, minf=9 00:31:41.909 IO depths : 1=0.2%, 2=5.8%, 4=66.8%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:41.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.909 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.909 issued rwts: total=8431,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.909 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:41.909 00:31:41.909 Run status group 0 (all jobs): 00:31:41.909 READ: bw=55.1MiB/s (57.8MB/s), 13.2MiB/s-14.5MiB/s (13.8MB/s-15.2MB/s), io=276MiB (289MB), run=5001-5003msec 00:31:41.909 11:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:41.909 11:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:41.909 11:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:41.909 11:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:41.909 11:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:41.909 11:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:41.909 11:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.909 11:47:15 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:41.909 11:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.909 11:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:41.909 11:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.909 11:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:41.909 11:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.909 11:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:41.909 11:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:41.909 11:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:41.909 11:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:41.909 11:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.909 11:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:41.909 11:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.909 11:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:41.909 11:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.909 11:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:41.909 11:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.909 00:31:41.909 real 0m25.018s 00:31:41.909 user 5m8.306s 00:31:41.909 sys 0m4.811s 00:31:41.909 11:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:41.909 11:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:41.909 ************************************ 00:31:41.909 END TEST fio_dif_rand_params 00:31:41.909 ************************************ 00:31:41.909 11:47:15 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:41.909 11:47:15 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:41.909 11:47:15 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:41.909 11:47:15 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:41.909 11:47:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:41.909 ************************************ 00:31:41.909 START TEST fio_dif_digest 00:31:41.909 ************************************ 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:41.909 11:47:15 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:41.909 bdev_null0 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.909 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:41.910 [2024-07-15 11:47:15.487078] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:41.910 { 00:31:41.910 "params": { 00:31:41.910 "name": "Nvme$subsystem", 00:31:41.910 "trtype": "$TEST_TRANSPORT", 00:31:41.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:41.910 "adrfam": "ipv4", 00:31:41.910 "trsvcid": "$NVMF_PORT", 00:31:41.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:41.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:41.910 "hdgst": ${hdgst:-false}, 00:31:41.910 "ddgst": ${ddgst:-false} 00:31:41.910 }, 00:31:41.910 "method": "bdev_nvme_attach_controller" 00:31:41.910 } 00:31:41.910 EOF 00:31:41.910 )") 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:41.910 "params": { 00:31:41.910 "name": "Nvme0", 00:31:41.910 "trtype": "tcp", 00:31:41.910 "traddr": "10.0.0.2", 00:31:41.910 "adrfam": "ipv4", 00:31:41.910 "trsvcid": "4420", 00:31:41.910 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:41.910 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:41.910 "hdgst": true, 00:31:41.910 "ddgst": true 00:31:41.910 }, 00:31:41.910 "method": "bdev_nvme_attach_controller" 00:31:41.910 }' 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:41.910 11:47:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:41.910 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:41.910 ... 
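For reference, the target side that this configuration attaches to was created earlier in the same test through rpc_cmd: a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3, exposed over a TCP listener. Collapsed into plain rpc.py calls (the rpc_cmd wrapper forwards to the same script; the rpc.py path is assumed relative to the SPDK tree), the sequence is roughly:

rpc=./scripts/rpc.py   # assumes the tcp transport already exists in the running target

$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420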
00:31:41.910 fio-3.35 00:31:41.910 Starting 3 threads 00:31:41.910 EAL: No free 2048 kB hugepages reported on node 1 00:31:54.112 00:31:54.112 filename0: (groupid=0, jobs=1): err= 0: pid=2996987: Mon Jul 15 11:47:26 2024 00:31:54.112 read: IOPS=201, BW=25.1MiB/s (26.4MB/s)(253MiB/10048msec) 00:31:54.112 slat (nsec): min=9717, max=59580, avg=21214.99, stdev=8667.37 00:31:54.112 clat (usec): min=11440, max=54860, avg=14868.09, stdev=1578.28 00:31:54.112 lat (usec): min=11457, max=54877, avg=14889.30, stdev=1577.78 00:31:54.112 clat percentiles (usec): 00:31:54.112 | 1.00th=[12518], 5.00th=[13173], 10.00th=[13566], 20.00th=[13960], 00:31:54.112 | 30.00th=[14222], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 00:31:54.112 | 70.00th=[15401], 80.00th=[15664], 90.00th=[16057], 95.00th=[16450], 00:31:54.112 | 99.00th=[17433], 99.50th=[17957], 99.90th=[19006], 99.95th=[52167], 00:31:54.112 | 99.99th=[54789] 00:31:54.112 bw ( KiB/s): min=25088, max=26368, per=36.26%, avg=25843.20, stdev=419.21, samples=20 00:31:54.112 iops : min= 196, max= 206, avg=201.90, stdev= 3.28, samples=20 00:31:54.112 lat (msec) : 20=99.90%, 100=0.10% 00:31:54.112 cpu : usr=93.25%, sys=4.88%, ctx=581, majf=0, minf=114 00:31:54.112 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:54.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.112 issued rwts: total=2021,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.112 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:54.112 filename0: (groupid=0, jobs=1): err= 0: pid=2996988: Mon Jul 15 11:47:26 2024 00:31:54.112 read: IOPS=179, BW=22.4MiB/s (23.5MB/s)(225MiB/10048msec) 00:31:54.112 slat (nsec): min=9966, max=57724, avg=20736.87, stdev=8380.80 00:31:54.112 clat (usec): min=12907, max=57269, avg=16688.26, stdev=1633.45 00:31:54.112 lat (usec): min=12943, max=57304, avg=16709.00, stdev=1633.86 00:31:54.112 clat percentiles (usec): 00:31:54.112 | 1.00th=[14222], 5.00th=[14877], 10.00th=[15270], 20.00th=[15795], 00:31:54.112 | 30.00th=[16057], 40.00th=[16319], 50.00th=[16581], 60.00th=[16909], 00:31:54.112 | 70.00th=[17171], 80.00th=[17433], 90.00th=[17957], 95.00th=[18482], 00:31:54.112 | 99.00th=[19268], 99.50th=[20055], 99.90th=[49021], 99.95th=[57410], 00:31:54.112 | 99.99th=[57410] 00:31:54.112 bw ( KiB/s): min=22316, max=23552, per=32.31%, avg=23029.40, stdev=380.45, samples=20 00:31:54.112 iops : min= 174, max= 184, avg=179.90, stdev= 3.01, samples=20 00:31:54.112 lat (msec) : 20=99.44%, 50=0.50%, 100=0.06% 00:31:54.112 cpu : usr=96.63%, sys=3.02%, ctx=19, majf=0, minf=115 00:31:54.112 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:54.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.112 issued rwts: total=1801,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.112 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:54.112 filename0: (groupid=0, jobs=1): err= 0: pid=2996989: Mon Jul 15 11:47:26 2024 00:31:54.112 read: IOPS=176, BW=22.1MiB/s (23.1MB/s)(222MiB/10046msec) 00:31:54.112 slat (nsec): min=9863, max=71967, avg=28016.79, stdev=12016.05 00:31:54.112 clat (usec): min=13681, max=51322, avg=16942.21, stdev=1516.16 00:31:54.112 lat (usec): min=13696, max=51363, avg=16970.22, stdev=1516.64 00:31:54.112 clat percentiles (usec): 00:31:54.112 | 1.00th=[14484], 
5.00th=[15270], 10.00th=[15664], 20.00th=[16057], 00:31:54.112 | 30.00th=[16319], 40.00th=[16581], 50.00th=[16909], 60.00th=[17171], 00:31:54.112 | 70.00th=[17433], 80.00th=[17695], 90.00th=[18220], 95.00th=[18744], 00:31:54.112 | 99.00th=[19530], 99.50th=[20317], 99.90th=[48497], 99.95th=[51119], 00:31:54.112 | 99.99th=[51119] 00:31:54.112 bw ( KiB/s): min=21760, max=23040, per=31.80%, avg=22668.80, stdev=326.73, samples=20 00:31:54.112 iops : min= 170, max= 180, avg=177.10, stdev= 2.55, samples=20 00:31:54.112 lat (msec) : 20=99.44%, 50=0.51%, 100=0.06% 00:31:54.112 cpu : usr=95.13%, sys=4.36%, ctx=43, majf=0, minf=109 00:31:54.112 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:54.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.112 issued rwts: total=1773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.112 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:54.112 00:31:54.112 Run status group 0 (all jobs): 00:31:54.112 READ: bw=69.6MiB/s (73.0MB/s), 22.1MiB/s-25.1MiB/s (23.1MB/s-26.4MB/s), io=699MiB (733MB), run=10046-10048msec 00:31:54.112 11:47:26 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:54.112 11:47:26 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:54.112 11:47:26 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:54.112 11:47:26 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:54.112 11:47:26 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:54.112 11:47:26 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:54.112 11:47:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.112 11:47:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:54.112 11:47:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.112 11:47:26 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:54.112 11:47:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.112 11:47:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:54.112 11:47:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.112 00:31:54.112 real 0m11.373s 00:31:54.112 user 0m39.975s 00:31:54.112 sys 0m1.619s 00:31:54.112 11:47:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:54.112 11:47:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:54.112 ************************************ 00:31:54.112 END TEST fio_dif_digest 00:31:54.112 ************************************ 00:31:54.112 11:47:26 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:54.112 11:47:26 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:54.112 11:47:26 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:54.112 11:47:26 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:54.112 11:47:26 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:54.112 11:47:26 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:54.112 11:47:26 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:54.113 11:47:26 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:54.113 11:47:26 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:31:54.113 rmmod nvme_tcp 00:31:54.113 rmmod nvme_fabrics 00:31:54.113 rmmod nvme_keyring 00:31:54.113 11:47:26 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:54.113 11:47:26 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:54.113 11:47:26 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:54.113 11:47:26 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2987448 ']' 00:31:54.113 11:47:26 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2987448 00:31:54.113 11:47:26 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 2987448 ']' 00:31:54.113 11:47:26 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 2987448 00:31:54.113 11:47:26 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:31:54.113 11:47:26 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:54.113 11:47:26 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2987448 00:31:54.113 11:47:26 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:54.113 11:47:26 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:54.113 11:47:26 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2987448' 00:31:54.113 killing process with pid 2987448 00:31:54.113 11:47:26 nvmf_dif -- common/autotest_common.sh@967 -- # kill 2987448 00:31:54.113 11:47:26 nvmf_dif -- common/autotest_common.sh@972 -- # wait 2987448 00:31:54.113 11:47:27 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:54.113 11:47:27 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:55.492 Waiting for block devices as requested 00:31:55.492 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:31:55.751 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:55.751 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:55.751 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:56.010 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:56.010 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:56.010 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:56.269 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:56.269 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:56.269 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:56.269 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:56.528 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:56.528 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:56.528 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:56.785 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:56.785 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:56.785 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:57.045 11:47:31 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:57.045 11:47:31 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:57.045 11:47:31 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:57.045 11:47:31 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:57.045 11:47:31 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.045 11:47:31 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:57.045 11:47:31 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.950 11:47:33 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:58.950 00:31:58.950 real 1m15.812s 00:31:58.950 user 7m43.080s 00:31:58.950 sys 0m19.462s 00:31:58.950 11:47:33 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:31:58.950 11:47:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:58.950 ************************************ 00:31:58.950 END TEST nvmf_dif 00:31:58.950 ************************************ 00:31:58.950 11:47:33 -- common/autotest_common.sh@1142 -- # return 0 00:31:58.950 11:47:33 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:58.950 11:47:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:58.950 11:47:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:58.950 11:47:33 -- common/autotest_common.sh@10 -- # set +x 00:31:58.950 ************************************ 00:31:58.950 START TEST nvmf_abort_qd_sizes 00:31:58.950 ************************************ 00:31:58.950 11:47:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:59.209 * Looking for test storage... 00:31:59.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.209 11:47:33 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:59.209 11:47:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:05.788 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:05.788 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:05.788 Found net devices under 0000:af:00.0: cvl_0_0 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:05.788 Found net devices under 0000:af:00.1: cvl_0_1 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
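The NIC scan above boils down to a sysfs lookup: for each supported e810 function (0000:af:00.0 and 0000:af:00.1 in this run) the script globs the function's net/ directory to find the interfaces it owns, keeps only ports that are up, and ends with cvl_0_0 and cvl_0_1. A standalone sketch of the lookup for one function:

# Map a PCI function to its network interfaces via sysfs (BDF from this run).
pci=0000:af:00.0
pci_net_devs=(/sys/bus/pci/devices/$pci/net/*)
pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep the names
echo "Found net devices under $pci: ${pci_net_devs[*]}"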
00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:05.788 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:05.789 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:05.789 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:05.789 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:05.789 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:05.789 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:05.789 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:05.789 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:05.789 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:05.789 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:05.789 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:05.789 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:05.789 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:05.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:05.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:32:05.789 00:32:05.789 --- 10.0.0.2 ping statistics --- 00:32:05.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:05.789 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:32:05.789 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:05.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:05.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:32:05.789 00:32:05.789 --- 10.0.0.1 ping statistics --- 00:32:05.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:05.789 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:32:05.789 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:05.789 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:32:05.789 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:32:05.789 11:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:07.690 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:07.690 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:07.690 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:07.690 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:07.690 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:07.690 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:07.690 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:07.690 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:07.690 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:07.690 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:07.690 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:07.690 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:07.690 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:07.947 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:07.947 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:07.947 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:08.883 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:32:08.883 11:47:43 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:08.883 11:47:43 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:08.883 11:47:43 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:08.883 11:47:43 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:08.883 11:47:43 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:08.883 11:47:43 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:08.883 11:47:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:08.883 11:47:43 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:08.883 11:47:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:08.883 11:47:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:08.883 11:47:43 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3005072 00:32:08.883 11:47:43 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3005072 00:32:08.883 11:47:43 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:08.883 11:47:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 3005072 ']' 00:32:08.883 11:47:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:08.883 11:47:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:08.883 11:47:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:08.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:08.883 11:47:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:08.883 11:47:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:08.883 [2024-07-15 11:47:43.254613] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:32:08.883 [2024-07-15 11:47:43.254667] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:08.883 EAL: No free 2048 kB hugepages reported on node 1 00:32:08.883 [2024-07-15 11:47:43.340244] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:09.141 [2024-07-15 11:47:43.436954] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:09.141 [2024-07-15 11:47:43.436997] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:09.141 [2024-07-15 11:47:43.437007] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:09.141 [2024-07-15 11:47:43.437016] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:09.141 [2024-07-15 11:47:43.437023] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:09.141 [2024-07-15 11:47:43.437080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.141 [2024-07-15 11:47:43.437192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:09.141 [2024-07-15 11:47:43.437305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:09.141 [2024-07-15 11:47:43.437305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.418 11:47:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:09.418 11:47:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:32:09.418 11:47:43 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:09.418 11:47:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:09.418 11:47:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:09.418 11:47:43 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:09.418 11:47:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:09.418 11:47:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:09.418 11:47:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:09.418 11:47:43 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:32:09.418 11:47:43 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:32:09.418 11:47:43 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:86:00.0 ]] 00:32:09.418 11:47:43 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:09.418 11:47:43 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:32:09.418 11:47:43 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:86:00.0 ]] 00:32:09.418 11:47:43 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:32:09.418 11:47:43 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:32:09.418 11:47:43 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:32:09.418 11:47:43 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:32:09.418 11:47:43 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:86:00.0 00:32:09.418 11:47:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:09.418 11:47:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:86:00.0 00:32:09.418 11:47:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:09.418 11:47:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:09.418 11:47:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:09.418 11:47:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:09.418 ************************************ 00:32:09.418 START TEST spdk_target_abort 00:32:09.418 ************************************ 00:32:09.418 11:47:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:32:09.418 11:47:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:09.418 11:47:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:86:00.0 -b spdk_target 00:32:09.418 11:47:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.418 11:47:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:12.741 spdk_targetn1 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:12.741 [2024-07-15 11:47:46.653490] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:12.741 [2024-07-15 11:47:46.701857] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:12.741 11:47:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:12.741 EAL: No free 2048 kB hugepages 
reported on node 1 00:32:16.024 Initializing NVMe Controllers 00:32:16.024 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:16.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:16.024 Initialization complete. Launching workers. 00:32:16.024 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 6421, failed: 0 00:32:16.024 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1187, failed to submit 5234 00:32:16.024 success 706, unsuccess 481, failed 0 00:32:16.024 11:47:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:16.024 11:47:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:16.024 EAL: No free 2048 kB hugepages reported on node 1 00:32:19.312 Initializing NVMe Controllers 00:32:19.312 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:19.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:19.312 Initialization complete. Launching workers. 00:32:19.312 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8592, failed: 0 00:32:19.312 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1241, failed to submit 7351 00:32:19.312 success 317, unsuccess 924, failed 0 00:32:19.312 11:47:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:19.312 11:47:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:19.312 EAL: No free 2048 kB hugepages reported on node 1 00:32:22.599 Initializing NVMe Controllers 00:32:22.599 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:22.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:22.599 Initialization complete. Launching workers. 
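Stripped of the xtrace plumbing, the spdk_target_abort pass traced above amounts to a short RPC sequence followed by three runs of the abort example at increasing queue depths. A condensed sketch, not a quote of abort_qd_sizes.sh, using the paths, the 0000:86:00.0 NVMe device and the 10.0.0.2:4420 listener from this particular run (rpc.py talks to the already-running target on its default /var/tmp/spdk.sock):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"

# Claim the local NVMe drive as an SPDK bdev and export it over NVMe/TCP.
$RPC bdev_nvme_attach_controller -t pcie -a 0000:86:00.0 -b spdk_target
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

# Mixed read/write I/O plus abort requests at each queue depth; the per-run
# "success / unsuccess / failed" lines in the log are this tool's summary.
for qd in 4 24 64; do
  "$SPDK/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done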
00:32:22.599 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 17964, failed: 0 00:32:22.599 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1984, failed to submit 15980 00:32:22.599 success 121, unsuccess 1863, failed 0 00:32:22.599 11:47:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:22.599 11:47:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.599 11:47:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:22.599 11:47:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.599 11:47:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:22.599 11:47:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.599 11:47:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:23.536 11:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.536 11:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3005072 00:32:23.536 11:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 3005072 ']' 00:32:23.536 11:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 3005072 00:32:23.536 11:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:32:23.536 11:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:23.536 11:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3005072 00:32:23.536 11:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:23.536 11:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:23.536 11:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3005072' 00:32:23.536 killing process with pid 3005072 00:32:23.536 11:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 3005072 00:32:23.536 11:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 3005072 00:32:23.795 00:32:23.795 real 0m14.234s 00:32:23.795 user 0m55.197s 00:32:23.795 sys 0m2.118s 00:32:23.795 11:47:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:23.795 11:47:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:23.795 ************************************ 00:32:23.795 END TEST spdk_target_abort 00:32:23.795 ************************************ 00:32:23.795 11:47:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:32:23.795 11:47:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:23.795 11:47:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:23.795 11:47:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:23.795 11:47:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:23.795 
************************************ 00:32:23.795 START TEST kernel_target_abort 00:32:23.795 ************************************ 00:32:23.795 11:47:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:32:23.795 11:47:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:23.795 11:47:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:32:23.795 11:47:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.795 11:47:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.795 11:47:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.795 11:47:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.795 11:47:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.795 11:47:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.795 11:47:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.795 11:47:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.795 11:47:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.795 11:47:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:23.795 11:47:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:23.795 11:47:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:23.795 11:47:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:23.795 11:47:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:23.795 11:47:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:23.795 11:47:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:32:23.795 11:47:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:23.795 11:47:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:23.795 11:47:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:23.795 11:47:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:26.354 Waiting for block devices as requested 00:32:26.354 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:32:26.354 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:26.612 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:26.612 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:26.612 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:26.870 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:26.870 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:26.870 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:26.870 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:27.127 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:27.127 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:27.127 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:27.385 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:27.385 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:27.386 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:27.386 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:27.645 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:27.645 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:27.645 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:27.645 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:27.645 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:27.645 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:27.645 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:27.645 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:27.645 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:27.645 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:27.645 No valid GPT data, bailing 00:32:27.645 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:27.645 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:32:27.645 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:32:27.645 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:27.645 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:27.645 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:27.645 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:27.645 11:48:02 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:27.645 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:27.645 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:32:27.645 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:27.645 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:32:27.645 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:27.645 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:32:27.645 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:32:27.645 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:32:27.645 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:27.904 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:32:27.904 00:32:27.904 Discovery Log Number of Records 2, Generation counter 2 00:32:27.904 =====Discovery Log Entry 0====== 00:32:27.904 trtype: tcp 00:32:27.904 adrfam: ipv4 00:32:27.904 subtype: current discovery subsystem 00:32:27.904 treq: not specified, sq flow control disable supported 00:32:27.904 portid: 1 00:32:27.904 trsvcid: 4420 00:32:27.904 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:27.904 traddr: 10.0.0.1 00:32:27.904 eflags: none 00:32:27.904 sectype: none 00:32:27.904 =====Discovery Log Entry 1====== 00:32:27.904 trtype: tcp 00:32:27.904 adrfam: ipv4 00:32:27.904 subtype: nvme subsystem 00:32:27.904 treq: not specified, sq flow control disable supported 00:32:27.904 portid: 1 00:32:27.904 trsvcid: 4420 00:32:27.904 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:27.904 traddr: 10.0.0.1 00:32:27.904 eflags: none 00:32:27.904 sectype: none 00:32:27.904 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:27.904 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:27.904 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:27.904 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:27.904 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:27.904 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:27.905 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:27.905 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:27.905 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:27.905 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:27.905 11:48:02 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:27.905 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:27.905 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:27.905 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:27.905 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:27.905 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:27.905 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:27.905 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:27.905 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:27.905 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:27.905 11:48:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:27.905 EAL: No free 2048 kB hugepages reported on node 1 00:32:31.193 Initializing NVMe Controllers 00:32:31.193 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:31.193 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:31.193 Initialization complete. Launching workers. 00:32:31.193 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 53385, failed: 0 00:32:31.193 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 53385, failed to submit 0 00:32:31.193 success 0, unsuccess 53385, failed 0 00:32:31.193 11:48:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:31.193 11:48:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:31.193 EAL: No free 2048 kB hugepages reported on node 1 00:32:34.527 Initializing NVMe Controllers 00:32:34.527 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:34.527 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:34.527 Initialization complete. Launching workers. 
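The kernel_target_abort pass exercises the in-kernel nvmet target instead of an SPDK one. The configure_kernel_target steps traced above reduce to the configfs sequence below; xtrace hides where each echo is redirected, so the attribute file names are the standard nvmet configfs names and should be read as an assumption about nvmf/common.sh rather than a verbatim copy:

modprobe nvmet
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"

# (the trace's `echo SPDK-nqn...` sets a subsystem identification attribute;
#  its destination is not visible in the log, so it is omitted here)
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

The abort example is then pointed at 10.0.0.1:4420 with the same 4/24/64 queue-depth sweep used against the SPDK target.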
00:32:34.528 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 87859, failed: 0 00:32:34.528 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22038, failed to submit 65821 00:32:34.528 success 0, unsuccess 22038, failed 0 00:32:34.528 11:48:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:34.528 11:48:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:34.528 EAL: No free 2048 kB hugepages reported on node 1 00:32:37.063 Initializing NVMe Controllers 00:32:37.063 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:37.063 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:37.063 Initialization complete. Launching workers. 00:32:37.063 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 83976, failed: 0 00:32:37.063 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20962, failed to submit 63014 00:32:37.063 success 0, unsuccess 20962, failed 0 00:32:37.063 11:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:37.063 11:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:37.063 11:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:32:37.063 11:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:37.063 11:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:37.063 11:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:37.063 11:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:37.063 11:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:37.063 11:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:37.063 11:48:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:39.598 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:39.857 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:39.857 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:39.857 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:39.857 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:39.857 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:39.857 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:39.857 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:39.857 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:39.857 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:39.857 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:39.857 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:39.857 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:39.857 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:32:39.857 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:39.857 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:40.795 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:32:40.795 00:32:40.795 real 0m17.098s 00:32:40.795 user 0m8.326s 00:32:40.795 sys 0m4.998s 00:32:40.795 11:48:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:40.795 11:48:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:40.795 ************************************ 00:32:40.795 END TEST kernel_target_abort 00:32:40.795 ************************************ 00:32:40.795 11:48:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:32:40.795 11:48:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:40.795 11:48:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:40.795 11:48:15 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:40.795 11:48:15 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:32:40.795 11:48:15 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:40.795 11:48:15 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:40.795 11:48:15 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:40.795 11:48:15 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:40.795 rmmod nvme_tcp 00:32:40.795 rmmod nvme_fabrics 00:32:41.055 rmmod nvme_keyring 00:32:41.055 11:48:15 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:41.055 11:48:15 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:41.055 11:48:15 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:41.055 11:48:15 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3005072 ']' 00:32:41.055 11:48:15 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3005072 00:32:41.055 11:48:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 3005072 ']' 00:32:41.055 11:48:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 3005072 00:32:41.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3005072) - No such process 00:32:41.055 11:48:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 3005072 is not found' 00:32:41.055 Process with pid 3005072 is not found 00:32:41.055 11:48:15 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:41.055 11:48:15 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:43.590 Waiting for block devices as requested 00:32:43.590 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:32:43.850 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:43.850 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:44.109 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:44.109 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:44.109 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:44.368 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:44.368 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:44.368 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:44.368 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:44.627 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:44.627 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:44.627 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:44.886 0000:80:04.3 (8086 2021): vfio-pci -> 
ioatdma 00:32:44.886 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:44.886 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:44.886 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:45.145 11:48:19 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:45.145 11:48:19 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:45.145 11:48:19 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:45.145 11:48:19 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:45.146 11:48:19 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.146 11:48:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:45.146 11:48:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.051 11:48:21 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:47.051 00:32:47.051 real 0m48.090s 00:32:47.051 user 1m7.870s 00:32:47.051 sys 0m15.875s 00:32:47.051 11:48:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:47.051 11:48:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:47.051 ************************************ 00:32:47.051 END TEST nvmf_abort_qd_sizes 00:32:47.051 ************************************ 00:32:47.310 11:48:21 -- common/autotest_common.sh@1142 -- # return 0 00:32:47.310 11:48:21 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:47.310 11:48:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:47.310 11:48:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:47.310 11:48:21 -- common/autotest_common.sh@10 -- # set +x 00:32:47.310 ************************************ 00:32:47.310 START TEST keyring_file 00:32:47.310 ************************************ 00:32:47.310 11:48:21 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:47.310 * Looking for test storage... 
00:32:47.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:47.310 11:48:21 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:47.310 11:48:21 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:47.310 11:48:21 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:47.310 11:48:21 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:47.310 11:48:21 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:47.310 11:48:21 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:47.310 11:48:21 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:47.310 11:48:21 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:47.310 11:48:21 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:47.310 11:48:21 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:47.310 11:48:21 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:47.310 11:48:21 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:47.310 11:48:21 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:47.310 11:48:21 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:47.310 11:48:21 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:47.310 11:48:21 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:47.310 11:48:21 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:47.310 11:48:21 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:47.310 11:48:21 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:47.311 11:48:21 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:47.311 11:48:21 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:47.311 11:48:21 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:47.311 11:48:21 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:47.311 11:48:21 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.311 11:48:21 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.311 11:48:21 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.311 11:48:21 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:47.311 11:48:21 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.311 11:48:21 keyring_file -- nvmf/common.sh@47 -- # : 0 00:32:47.311 11:48:21 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:47.311 11:48:21 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:47.311 11:48:21 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:47.311 11:48:21 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:47.311 11:48:21 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:47.311 11:48:21 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:47.311 11:48:21 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:47.311 11:48:21 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:47.311 11:48:21 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:47.311 11:48:21 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:47.311 11:48:21 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:47.311 11:48:21 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:47.311 11:48:21 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:47.311 11:48:21 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:47.311 11:48:21 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:47.311 11:48:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:47.311 11:48:21 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:47.311 11:48:21 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:47.311 11:48:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:47.311 11:48:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:47.311 11:48:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.DEroAOiieI 00:32:47.311 11:48:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:47.311 11:48:21 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:47.311 11:48:21 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:47.311 11:48:21 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:47.311 11:48:21 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:47.311 11:48:21 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:47.311 11:48:21 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:47.311 11:48:21 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DEroAOiieI 00:32:47.311 11:48:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.DEroAOiieI 00:32:47.311 11:48:21 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.DEroAOiieI 00:32:47.311 11:48:21 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:47.311 11:48:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:47.311 11:48:21 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:47.311 11:48:21 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:47.311 11:48:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:47.311 11:48:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:47.311 11:48:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FdQrlu1Kbg 00:32:47.311 11:48:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:47.311 11:48:21 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:47.311 11:48:21 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:47.311 11:48:21 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:47.311 11:48:21 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:47.311 11:48:21 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:47.311 11:48:21 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:47.570 11:48:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FdQrlu1Kbg 00:32:47.570 11:48:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FdQrlu1Kbg 00:32:47.571 11:48:21 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.FdQrlu1Kbg 00:32:47.571 11:48:21 keyring_file -- keyring/file.sh@30 -- # tgtpid=3014781 00:32:47.571 11:48:21 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3014781 00:32:47.571 11:48:21 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3014781 ']' 00:32:47.571 11:48:21 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:47.571 11:48:21 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:47.571 11:48:21 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:47.571 11:48:21 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:47.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:47.571 11:48:21 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:47.571 11:48:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:47.571 [2024-07-15 11:48:21.858718] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
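Before any bdevperf traffic, keyring_file prepares two PSK files from the hex keys defined at the top of the test. A sketch of the per-key steps traced above; it assumes test/nvmf/common.sh is sourced so that format_interchange_psk is available, and the exact NVMeTLSkey-1 string it emits (and the redirection into the temp file) is not shown in this log:

keyfile=$(mktemp)    # e.g. /tmp/tmp.DEroAOiieI in this run
format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$keyfile"
# 0600 matters: the keyring module rejects key files with group/other access,
# as the deliberate chmod 0660 negative test later in this log demonstrates.
chmod 0600 "$keyfile"
# Once bdevperf is listening on /var/tmp/bperf.sock, the file is registered as a named key:
#   scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$keyfile"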
00:32:47.571 [2024-07-15 11:48:21.858781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3014781 ] 00:32:47.571 EAL: No free 2048 kB hugepages reported on node 1 00:32:47.571 [2024-07-15 11:48:21.943288] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:47.571 [2024-07-15 11:48:22.033502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:48.948 11:48:23 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:48.948 11:48:23 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:48.948 11:48:23 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:48.948 11:48:23 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.948 11:48:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:48.948 [2024-07-15 11:48:23.049043] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:48.948 null0 00:32:48.948 [2024-07-15 11:48:23.081072] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:48.948 [2024-07-15 11:48:23.081385] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:48.948 [2024-07-15 11:48:23.089084] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:48.948 11:48:23 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.948 11:48:23 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:48.948 11:48:23 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:48.948 11:48:23 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:48.948 11:48:23 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:48.948 11:48:23 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:48.948 11:48:23 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:48.948 11:48:23 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:48.948 11:48:23 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:48.948 11:48:23 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.948 11:48:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:48.948 [2024-07-15 11:48:23.101122] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:48.948 request: 00:32:48.948 { 00:32:48.948 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:48.948 "secure_channel": false, 00:32:48.948 "listen_address": { 00:32:48.948 "trtype": "tcp", 00:32:48.948 "traddr": "127.0.0.1", 00:32:48.948 "trsvcid": "4420" 00:32:48.948 }, 00:32:48.948 "method": "nvmf_subsystem_add_listener", 00:32:48.948 "req_id": 1 00:32:48.948 } 00:32:48.948 Got JSON-RPC error response 00:32:48.948 response: 00:32:48.948 { 00:32:48.948 "code": -32602, 00:32:48.948 "message": "Invalid parameters" 00:32:48.948 } 00:32:48.948 11:48:23 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:48.948 11:48:23 keyring_file -- common/autotest_common.sh@651 -- # es=1 
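The rest of the run drives an idle bdevperf instance over its own RPC socket rather than the target's. In outline, lightly condensed from the commands traced below:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Start bdevperf idle (-z) so bdevs can be added over RPC before I/O begins;
# the test waits for /var/tmp/bperf.sock to appear before issuing RPCs.
"$SPDK/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z &

$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DEroAOiieI
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.FdQrlu1Kbg

# Attach the TLS-enabled 127.0.0.1:4420 target configured above, authenticating with key0.
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

# Kick off the one-second randrw workload defined on the bdevperf command line.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests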
00:32:48.948 11:48:23 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:48.948 11:48:23 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:48.948 11:48:23 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:48.948 11:48:23 keyring_file -- keyring/file.sh@46 -- # bperfpid=3014975 00:32:48.948 11:48:23 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3014975 /var/tmp/bperf.sock 00:32:48.948 11:48:23 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3014975 ']' 00:32:48.948 11:48:23 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:48.948 11:48:23 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:48.948 11:48:23 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:48.948 11:48:23 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:48.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:48.948 11:48:23 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:48.948 11:48:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:48.948 [2024-07-15 11:48:23.185669] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 00:32:48.948 [2024-07-15 11:48:23.185774] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3014975 ] 00:32:48.948 EAL: No free 2048 kB hugepages reported on node 1 00:32:48.948 [2024-07-15 11:48:23.302048] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:48.948 [2024-07-15 11:48:23.407581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:49.207 11:48:23 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:49.207 11:48:23 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:49.207 11:48:23 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DEroAOiieI 00:32:49.207 11:48:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DEroAOiieI 00:32:49.465 11:48:23 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.FdQrlu1Kbg 00:32:49.465 11:48:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.FdQrlu1Kbg 00:32:49.724 11:48:24 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:49.724 11:48:24 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:49.724 11:48:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:49.724 11:48:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:49.724 11:48:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:49.983 11:48:24 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.DEroAOiieI == \/\t\m\p\/\t\m\p\.\D\E\r\o\A\O\i\i\e\I ]] 00:32:49.983 11:48:24 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:32:49.983 11:48:24 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:49.983 11:48:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:49.983 11:48:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:49.983 11:48:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:50.242 11:48:24 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.FdQrlu1Kbg == \/\t\m\p\/\t\m\p\.\F\d\Q\r\l\u\1\K\b\g ]] 00:32:50.242 11:48:24 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:50.242 11:48:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:50.242 11:48:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:50.242 11:48:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:50.242 11:48:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:50.242 11:48:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:50.500 11:48:24 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:50.500 11:48:24 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:50.500 11:48:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:50.500 11:48:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:50.500 11:48:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:50.500 11:48:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:50.500 11:48:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:50.781 11:48:25 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:50.781 11:48:25 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:50.781 11:48:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:51.067 [2024-07-15 11:48:25.386289] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:51.067 nvme0n1 00:32:51.067 11:48:25 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:51.067 11:48:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:51.067 11:48:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:51.067 11:48:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:51.067 11:48:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:51.067 11:48:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:51.324 11:48:25 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:51.324 11:48:25 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:51.324 11:48:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:51.324 11:48:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:51.324 11:48:25 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:51.324 11:48:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:51.324 11:48:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:51.582 11:48:25 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:51.582 11:48:25 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:51.839 Running I/O for 1 seconds... 00:32:52.770 00:32:52.770 Latency(us) 00:32:52.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:52.770 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:52.770 nvme0n1 : 1.01 9445.63 36.90 0.00 0.00 13502.74 4885.41 19779.96 00:32:52.770 =================================================================================================================== 00:32:52.770 Total : 9445.63 36.90 0.00 0.00 13502.74 4885.41 19779.96 00:32:52.770 0 00:32:52.770 11:48:27 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:52.770 11:48:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:53.027 11:48:27 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:53.027 11:48:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:53.027 11:48:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:53.027 11:48:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:53.027 11:48:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:53.027 11:48:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:53.285 11:48:27 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:53.285 11:48:27 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:53.285 11:48:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:53.285 11:48:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:53.285 11:48:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:53.285 11:48:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:53.285 11:48:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:53.543 11:48:27 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:53.543 11:48:27 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:53.543 11:48:27 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:53.543 11:48:27 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:53.544 11:48:27 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:53.544 11:48:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:53.544 11:48:27 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:53.544 11:48:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:53.544 11:48:27 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:53.544 11:48:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:53.802 [2024-07-15 11:48:28.150668] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:53.802 [2024-07-15 11:48:28.151197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c9ce0 (107): Transport endpoint is not connected 00:32:53.802 [2024-07-15 11:48:28.152189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c9ce0 (9): Bad file descriptor 00:32:53.802 [2024-07-15 11:48:28.153189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:53.802 [2024-07-15 11:48:28.153216] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:53.802 [2024-07-15 11:48:28.153228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:53.802 request: 00:32:53.802 { 00:32:53.802 "name": "nvme0", 00:32:53.802 "trtype": "tcp", 00:32:53.802 "traddr": "127.0.0.1", 00:32:53.802 "adrfam": "ipv4", 00:32:53.802 "trsvcid": "4420", 00:32:53.802 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:53.802 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:53.802 "prchk_reftag": false, 00:32:53.802 "prchk_guard": false, 00:32:53.802 "hdgst": false, 00:32:53.802 "ddgst": false, 00:32:53.802 "psk": "key1", 00:32:53.802 "method": "bdev_nvme_attach_controller", 00:32:53.802 "req_id": 1 00:32:53.802 } 00:32:53.802 Got JSON-RPC error response 00:32:53.802 response: 00:32:53.802 { 00:32:53.802 "code": -5, 00:32:53.802 "message": "Input/output error" 00:32:53.802 } 00:32:53.802 11:48:28 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:53.802 11:48:28 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:53.802 11:48:28 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:53.802 11:48:28 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:53.802 11:48:28 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:53.802 11:48:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:53.802 11:48:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:53.802 11:48:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:53.802 11:48:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:53.802 11:48:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:54.061 11:48:28 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:54.061 11:48:28 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:54.061 11:48:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:54.061 11:48:28 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:54.061 11:48:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:54.061 11:48:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:54.061 11:48:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:54.319 11:48:28 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:54.319 11:48:28 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:54.319 11:48:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:54.577 11:48:28 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:54.577 11:48:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:54.835 11:48:29 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:54.835 11:48:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:54.835 11:48:29 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:55.091 11:48:29 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:55.091 11:48:29 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.DEroAOiieI 00:32:55.091 11:48:29 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.DEroAOiieI 00:32:55.091 11:48:29 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:55.091 11:48:29 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.DEroAOiieI 00:32:55.091 11:48:29 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:55.092 11:48:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:55.092 11:48:29 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:55.092 11:48:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:55.092 11:48:29 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DEroAOiieI 00:32:55.092 11:48:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DEroAOiieI 00:32:55.349 [2024-07-15 11:48:29.649157] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.DEroAOiieI': 0100660 00:32:55.349 [2024-07-15 11:48:29.649192] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:55.349 request: 00:32:55.349 { 00:32:55.349 "name": "key0", 00:32:55.349 "path": "/tmp/tmp.DEroAOiieI", 00:32:55.349 "method": "keyring_file_add_key", 00:32:55.349 "req_id": 1 00:32:55.349 } 00:32:55.349 Got JSON-RPC error response 00:32:55.349 response: 00:32:55.349 { 00:32:55.349 "code": -1, 00:32:55.349 "message": "Operation not permitted" 00:32:55.349 } 00:32:55.349 11:48:29 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:55.349 11:48:29 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:55.349 11:48:29 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:55.349 11:48:29 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:55.349 11:48:29 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.DEroAOiieI 00:32:55.350 11:48:29 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DEroAOiieI 00:32:55.350 11:48:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DEroAOiieI 00:32:55.607 11:48:29 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.DEroAOiieI 00:32:55.607 11:48:29 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:55.607 11:48:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:55.607 11:48:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:55.607 11:48:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:55.607 11:48:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:55.607 11:48:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:55.866 11:48:30 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:55.866 11:48:30 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:55.866 11:48:30 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:55.866 11:48:30 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:55.866 11:48:30 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:55.866 11:48:30 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:55.866 11:48:30 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:55.866 11:48:30 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:55.866 11:48:30 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:55.866 11:48:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:56.125 [2024-07-15 11:48:30.427317] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.DEroAOiieI': No such file or directory 00:32:56.125 [2024-07-15 11:48:30.427353] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:56.125 [2024-07-15 11:48:30.427391] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:56.125 [2024-07-15 11:48:30.427402] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:56.125 [2024-07-15 11:48:30.427413] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:56.125 request: 00:32:56.125 { 00:32:56.125 "name": "nvme0", 00:32:56.125 "trtype": "tcp", 00:32:56.125 "traddr": "127.0.0.1", 00:32:56.125 "adrfam": "ipv4", 00:32:56.125 
"trsvcid": "4420", 00:32:56.125 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:56.125 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:56.125 "prchk_reftag": false, 00:32:56.125 "prchk_guard": false, 00:32:56.125 "hdgst": false, 00:32:56.125 "ddgst": false, 00:32:56.125 "psk": "key0", 00:32:56.125 "method": "bdev_nvme_attach_controller", 00:32:56.125 "req_id": 1 00:32:56.125 } 00:32:56.125 Got JSON-RPC error response 00:32:56.125 response: 00:32:56.125 { 00:32:56.125 "code": -19, 00:32:56.125 "message": "No such device" 00:32:56.125 } 00:32:56.125 11:48:30 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:56.125 11:48:30 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:56.125 11:48:30 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:56.125 11:48:30 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:56.125 11:48:30 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:56.125 11:48:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:56.383 11:48:30 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:56.383 11:48:30 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:56.383 11:48:30 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:56.384 11:48:30 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:56.384 11:48:30 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:56.384 11:48:30 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:56.384 11:48:30 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.m9lHS7OnFz 00:32:56.384 11:48:30 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:56.384 11:48:30 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:56.384 11:48:30 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:56.384 11:48:30 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:56.384 11:48:30 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:56.384 11:48:30 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:56.384 11:48:30 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:56.384 11:48:30 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.m9lHS7OnFz 00:32:56.384 11:48:30 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.m9lHS7OnFz 00:32:56.384 11:48:30 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.m9lHS7OnFz 00:32:56.384 11:48:30 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.m9lHS7OnFz 00:32:56.384 11:48:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.m9lHS7OnFz 00:32:56.642 11:48:30 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:56.642 11:48:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:56.934 nvme0n1 00:32:56.934 
11:48:31 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:56.934 11:48:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:56.934 11:48:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:56.934 11:48:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:56.934 11:48:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:56.934 11:48:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:57.192 11:48:31 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:57.192 11:48:31 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:57.192 11:48:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:57.451 11:48:31 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:57.451 11:48:31 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:57.451 11:48:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:57.451 11:48:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:57.451 11:48:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:57.709 11:48:32 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:57.709 11:48:32 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:57.709 11:48:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:57.709 11:48:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:57.709 11:48:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:57.709 11:48:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:57.709 11:48:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:57.967 11:48:32 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:57.967 11:48:32 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:57.967 11:48:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:58.226 11:48:32 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:58.226 11:48:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:58.226 11:48:32 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:58.484 11:48:32 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:58.484 11:48:32 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.m9lHS7OnFz 00:32:58.484 11:48:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.m9lHS7OnFz 00:32:58.743 11:48:33 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.FdQrlu1Kbg 00:32:58.743 11:48:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.FdQrlu1Kbg 00:32:59.011 11:48:33 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:59.011 11:48:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:59.270 nvme0n1 00:32:59.270 11:48:33 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:59.270 11:48:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:59.529 11:48:33 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:59.529 "subsystems": [ 00:32:59.529 { 00:32:59.529 "subsystem": "keyring", 00:32:59.529 "config": [ 00:32:59.529 { 00:32:59.529 "method": "keyring_file_add_key", 00:32:59.529 "params": { 00:32:59.529 "name": "key0", 00:32:59.529 "path": "/tmp/tmp.m9lHS7OnFz" 00:32:59.529 } 00:32:59.529 }, 00:32:59.529 { 00:32:59.529 "method": "keyring_file_add_key", 00:32:59.529 "params": { 00:32:59.529 "name": "key1", 00:32:59.529 "path": "/tmp/tmp.FdQrlu1Kbg" 00:32:59.529 } 00:32:59.529 } 00:32:59.529 ] 00:32:59.529 }, 00:32:59.529 { 00:32:59.529 "subsystem": "iobuf", 00:32:59.529 "config": [ 00:32:59.529 { 00:32:59.529 "method": "iobuf_set_options", 00:32:59.529 "params": { 00:32:59.529 "small_pool_count": 8192, 00:32:59.529 "large_pool_count": 1024, 00:32:59.529 "small_bufsize": 8192, 00:32:59.529 "large_bufsize": 135168 00:32:59.529 } 00:32:59.529 } 00:32:59.529 ] 00:32:59.529 }, 00:32:59.529 { 00:32:59.529 "subsystem": "sock", 00:32:59.529 "config": [ 00:32:59.529 { 00:32:59.529 "method": "sock_set_default_impl", 00:32:59.529 "params": { 00:32:59.529 "impl_name": "posix" 00:32:59.529 } 00:32:59.529 }, 00:32:59.529 { 00:32:59.529 "method": "sock_impl_set_options", 00:32:59.529 "params": { 00:32:59.529 "impl_name": "ssl", 00:32:59.529 "recv_buf_size": 4096, 00:32:59.529 "send_buf_size": 4096, 00:32:59.529 "enable_recv_pipe": true, 00:32:59.529 "enable_quickack": false, 00:32:59.529 "enable_placement_id": 0, 00:32:59.529 "enable_zerocopy_send_server": true, 00:32:59.529 "enable_zerocopy_send_client": false, 00:32:59.529 "zerocopy_threshold": 0, 00:32:59.529 "tls_version": 0, 00:32:59.529 "enable_ktls": false 00:32:59.529 } 00:32:59.529 }, 00:32:59.529 { 00:32:59.529 "method": "sock_impl_set_options", 00:32:59.529 "params": { 00:32:59.529 "impl_name": "posix", 00:32:59.529 "recv_buf_size": 2097152, 00:32:59.529 "send_buf_size": 2097152, 00:32:59.529 "enable_recv_pipe": true, 00:32:59.529 "enable_quickack": false, 00:32:59.529 "enable_placement_id": 0, 00:32:59.529 "enable_zerocopy_send_server": true, 00:32:59.529 "enable_zerocopy_send_client": false, 00:32:59.529 "zerocopy_threshold": 0, 00:32:59.529 "tls_version": 0, 00:32:59.529 "enable_ktls": false 00:32:59.529 } 00:32:59.529 } 00:32:59.529 ] 00:32:59.529 }, 00:32:59.529 { 00:32:59.529 "subsystem": "vmd", 00:32:59.530 "config": [] 00:32:59.530 }, 00:32:59.530 { 00:32:59.530 "subsystem": "accel", 00:32:59.530 "config": [ 00:32:59.530 { 00:32:59.530 "method": "accel_set_options", 00:32:59.530 "params": { 00:32:59.530 "small_cache_size": 128, 00:32:59.530 "large_cache_size": 16, 00:32:59.530 "task_count": 2048, 00:32:59.530 "sequence_count": 2048, 00:32:59.530 "buf_count": 2048 00:32:59.530 } 00:32:59.530 } 00:32:59.530 ] 00:32:59.530 
}, 00:32:59.530 { 00:32:59.530 "subsystem": "bdev", 00:32:59.530 "config": [ 00:32:59.530 { 00:32:59.530 "method": "bdev_set_options", 00:32:59.530 "params": { 00:32:59.530 "bdev_io_pool_size": 65535, 00:32:59.530 "bdev_io_cache_size": 256, 00:32:59.530 "bdev_auto_examine": true, 00:32:59.530 "iobuf_small_cache_size": 128, 00:32:59.530 "iobuf_large_cache_size": 16 00:32:59.530 } 00:32:59.530 }, 00:32:59.530 { 00:32:59.530 "method": "bdev_raid_set_options", 00:32:59.530 "params": { 00:32:59.530 "process_window_size_kb": 1024 00:32:59.530 } 00:32:59.530 }, 00:32:59.530 { 00:32:59.530 "method": "bdev_iscsi_set_options", 00:32:59.530 "params": { 00:32:59.530 "timeout_sec": 30 00:32:59.530 } 00:32:59.530 }, 00:32:59.530 { 00:32:59.530 "method": "bdev_nvme_set_options", 00:32:59.530 "params": { 00:32:59.530 "action_on_timeout": "none", 00:32:59.530 "timeout_us": 0, 00:32:59.530 "timeout_admin_us": 0, 00:32:59.530 "keep_alive_timeout_ms": 10000, 00:32:59.530 "arbitration_burst": 0, 00:32:59.530 "low_priority_weight": 0, 00:32:59.530 "medium_priority_weight": 0, 00:32:59.530 "high_priority_weight": 0, 00:32:59.530 "nvme_adminq_poll_period_us": 10000, 00:32:59.530 "nvme_ioq_poll_period_us": 0, 00:32:59.530 "io_queue_requests": 512, 00:32:59.530 "delay_cmd_submit": true, 00:32:59.530 "transport_retry_count": 4, 00:32:59.530 "bdev_retry_count": 3, 00:32:59.530 "transport_ack_timeout": 0, 00:32:59.530 "ctrlr_loss_timeout_sec": 0, 00:32:59.530 "reconnect_delay_sec": 0, 00:32:59.530 "fast_io_fail_timeout_sec": 0, 00:32:59.530 "disable_auto_failback": false, 00:32:59.530 "generate_uuids": false, 00:32:59.530 "transport_tos": 0, 00:32:59.530 "nvme_error_stat": false, 00:32:59.530 "rdma_srq_size": 0, 00:32:59.530 "io_path_stat": false, 00:32:59.530 "allow_accel_sequence": false, 00:32:59.530 "rdma_max_cq_size": 0, 00:32:59.530 "rdma_cm_event_timeout_ms": 0, 00:32:59.530 "dhchap_digests": [ 00:32:59.530 "sha256", 00:32:59.530 "sha384", 00:32:59.530 "sha512" 00:32:59.530 ], 00:32:59.530 "dhchap_dhgroups": [ 00:32:59.530 "null", 00:32:59.530 "ffdhe2048", 00:32:59.530 "ffdhe3072", 00:32:59.530 "ffdhe4096", 00:32:59.530 "ffdhe6144", 00:32:59.530 "ffdhe8192" 00:32:59.530 ] 00:32:59.530 } 00:32:59.530 }, 00:32:59.530 { 00:32:59.530 "method": "bdev_nvme_attach_controller", 00:32:59.530 "params": { 00:32:59.530 "name": "nvme0", 00:32:59.530 "trtype": "TCP", 00:32:59.530 "adrfam": "IPv4", 00:32:59.530 "traddr": "127.0.0.1", 00:32:59.530 "trsvcid": "4420", 00:32:59.530 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:59.530 "prchk_reftag": false, 00:32:59.530 "prchk_guard": false, 00:32:59.530 "ctrlr_loss_timeout_sec": 0, 00:32:59.530 "reconnect_delay_sec": 0, 00:32:59.530 "fast_io_fail_timeout_sec": 0, 00:32:59.530 "psk": "key0", 00:32:59.530 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:59.530 "hdgst": false, 00:32:59.530 "ddgst": false 00:32:59.530 } 00:32:59.530 }, 00:32:59.530 { 00:32:59.530 "method": "bdev_nvme_set_hotplug", 00:32:59.530 "params": { 00:32:59.530 "period_us": 100000, 00:32:59.530 "enable": false 00:32:59.530 } 00:32:59.530 }, 00:32:59.530 { 00:32:59.530 "method": "bdev_wait_for_examine" 00:32:59.530 } 00:32:59.530 ] 00:32:59.530 }, 00:32:59.530 { 00:32:59.530 "subsystem": "nbd", 00:32:59.530 "config": [] 00:32:59.530 } 00:32:59.530 ] 00:32:59.530 }' 00:32:59.530 11:48:33 keyring_file -- keyring/file.sh@114 -- # killprocess 3014975 00:32:59.530 11:48:33 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3014975 ']' 00:32:59.530 11:48:33 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 3014975 00:32:59.530 11:48:33 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:59.530 11:48:33 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:59.530 11:48:33 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3014975 00:32:59.530 11:48:33 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:59.530 11:48:33 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:59.530 11:48:33 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3014975' 00:32:59.530 killing process with pid 3014975 00:32:59.530 11:48:33 keyring_file -- common/autotest_common.sh@967 -- # kill 3014975 00:32:59.530 Received shutdown signal, test time was about 1.000000 seconds 00:32:59.530 00:32:59.530 Latency(us) 00:32:59.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:59.530 =================================================================================================================== 00:32:59.530 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:59.530 11:48:33 keyring_file -- common/autotest_common.sh@972 -- # wait 3014975 00:32:59.790 11:48:34 keyring_file -- keyring/file.sh@117 -- # bperfpid=3017044 00:32:59.790 11:48:34 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3017044 /var/tmp/bperf.sock 00:32:59.790 11:48:34 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3017044 ']' 00:32:59.790 11:48:34 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:59.790 11:48:34 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:59.790 11:48:34 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:59.790 11:48:34 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:59.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
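For reference, the keyring_file flow traced above reduces to the following RPC sequence (a condensed sketch; the bperf socket path, key file name and NQNs are the ones used in this run, and it assumes a bdevperf instance is already listening on /var/tmp/bperf.sock):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # key files must be mode 0600 -- 0660 is rejected with "Invalid permissions for key file"
    chmod 0600 /tmp/tmp.m9lHS7OnFz
    $rpc -s $sock keyring_file_add_key key0 /tmp/tmp.m9lHS7OnFz

    # attach a controller that references the key by name; key0's refcnt goes from 1 to 2
    $rpc -s $sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

    # inspect keyring state the same way the get_refcnt helper does
    $rpc -s $sock keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'

    # detaching the controller releases the reference again
    $rpc -s $sock bdev_nvme_detach_controller nvme0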
00:32:59.790 11:48:34 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:59.790 11:48:34 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:59.790 "subsystems": [ 00:32:59.790 { 00:32:59.790 "subsystem": "keyring", 00:32:59.790 "config": [ 00:32:59.790 { 00:32:59.790 "method": "keyring_file_add_key", 00:32:59.790 "params": { 00:32:59.790 "name": "key0", 00:32:59.790 "path": "/tmp/tmp.m9lHS7OnFz" 00:32:59.790 } 00:32:59.790 }, 00:32:59.790 { 00:32:59.790 "method": "keyring_file_add_key", 00:32:59.790 "params": { 00:32:59.790 "name": "key1", 00:32:59.790 "path": "/tmp/tmp.FdQrlu1Kbg" 00:32:59.790 } 00:32:59.790 } 00:32:59.790 ] 00:32:59.790 }, 00:32:59.790 { 00:32:59.790 "subsystem": "iobuf", 00:32:59.790 "config": [ 00:32:59.790 { 00:32:59.790 "method": "iobuf_set_options", 00:32:59.790 "params": { 00:32:59.790 "small_pool_count": 8192, 00:32:59.790 "large_pool_count": 1024, 00:32:59.790 "small_bufsize": 8192, 00:32:59.790 "large_bufsize": 135168 00:32:59.790 } 00:32:59.790 } 00:32:59.790 ] 00:32:59.790 }, 00:32:59.790 { 00:32:59.790 "subsystem": "sock", 00:32:59.790 "config": [ 00:32:59.790 { 00:32:59.790 "method": "sock_set_default_impl", 00:32:59.790 "params": { 00:32:59.790 "impl_name": "posix" 00:32:59.790 } 00:32:59.790 }, 00:32:59.790 { 00:32:59.790 "method": "sock_impl_set_options", 00:32:59.790 "params": { 00:32:59.790 "impl_name": "ssl", 00:32:59.790 "recv_buf_size": 4096, 00:32:59.790 "send_buf_size": 4096, 00:32:59.790 "enable_recv_pipe": true, 00:32:59.790 "enable_quickack": false, 00:32:59.790 "enable_placement_id": 0, 00:32:59.790 "enable_zerocopy_send_server": true, 00:32:59.790 "enable_zerocopy_send_client": false, 00:32:59.790 "zerocopy_threshold": 0, 00:32:59.790 "tls_version": 0, 00:32:59.790 "enable_ktls": false 00:32:59.790 } 00:32:59.790 }, 00:32:59.790 { 00:32:59.790 "method": "sock_impl_set_options", 00:32:59.790 "params": { 00:32:59.790 "impl_name": "posix", 00:32:59.790 "recv_buf_size": 2097152, 00:32:59.790 "send_buf_size": 2097152, 00:32:59.790 "enable_recv_pipe": true, 00:32:59.790 "enable_quickack": false, 00:32:59.790 "enable_placement_id": 0, 00:32:59.790 "enable_zerocopy_send_server": true, 00:32:59.790 "enable_zerocopy_send_client": false, 00:32:59.790 "zerocopy_threshold": 0, 00:32:59.790 "tls_version": 0, 00:32:59.790 "enable_ktls": false 00:32:59.790 } 00:32:59.790 } 00:32:59.790 ] 00:32:59.790 }, 00:32:59.790 { 00:32:59.790 "subsystem": "vmd", 00:32:59.790 "config": [] 00:32:59.790 }, 00:32:59.790 { 00:32:59.790 "subsystem": "accel", 00:32:59.790 "config": [ 00:32:59.790 { 00:32:59.790 "method": "accel_set_options", 00:32:59.790 "params": { 00:32:59.790 "small_cache_size": 128, 00:32:59.790 "large_cache_size": 16, 00:32:59.790 "task_count": 2048, 00:32:59.790 "sequence_count": 2048, 00:32:59.790 "buf_count": 2048 00:32:59.790 } 00:32:59.790 } 00:32:59.790 ] 00:32:59.790 }, 00:32:59.790 { 00:32:59.790 "subsystem": "bdev", 00:32:59.790 "config": [ 00:32:59.790 { 00:32:59.790 "method": "bdev_set_options", 00:32:59.790 "params": { 00:32:59.790 "bdev_io_pool_size": 65535, 00:32:59.790 "bdev_io_cache_size": 256, 00:32:59.790 "bdev_auto_examine": true, 00:32:59.790 "iobuf_small_cache_size": 128, 00:32:59.790 "iobuf_large_cache_size": 16 00:32:59.790 } 00:32:59.790 }, 00:32:59.790 { 00:32:59.790 "method": "bdev_raid_set_options", 00:32:59.790 "params": { 00:32:59.790 "process_window_size_kb": 1024 00:32:59.790 } 00:32:59.790 }, 00:32:59.790 { 00:32:59.790 "method": "bdev_iscsi_set_options", 00:32:59.790 "params": { 00:32:59.790 
"timeout_sec": 30 00:32:59.790 } 00:32:59.790 }, 00:32:59.790 { 00:32:59.790 "method": "bdev_nvme_set_options", 00:32:59.790 "params": { 00:32:59.790 "action_on_timeout": "none", 00:32:59.790 "timeout_us": 0, 00:32:59.790 "timeout_admin_us": 0, 00:32:59.790 "keep_alive_timeout_ms": 10000, 00:32:59.790 "arbitration_burst": 0, 00:32:59.790 "low_priority_weight": 0, 00:32:59.790 "medium_priority_weight": 0, 00:32:59.790 "high_priority_weight": 0, 00:32:59.790 "nvme_adminq_poll_period_us": 10000, 00:32:59.790 "nvme_ioq_poll_period_us": 0, 00:32:59.790 "io_queue_requests": 512, 00:32:59.790 "delay_cmd_submit": true, 00:32:59.790 "transport_retry_count": 4, 00:32:59.790 "bdev_retry_count": 3, 00:32:59.790 "transport_ack_timeout": 0, 00:32:59.790 "ctrlr_loss_timeout_sec": 0, 00:32:59.790 "reconnect_delay_sec": 0, 00:32:59.790 "fast_io_fail_timeout_sec": 0, 00:32:59.790 "disable_auto_failback": false, 00:32:59.790 "generate_uuids": false, 00:32:59.790 "transport_tos": 0, 00:32:59.790 "nvme_error_stat": false, 00:32:59.790 "rdma_srq_size": 0, 00:32:59.790 "io_path_stat": false, 00:32:59.790 "allow_accel_sequence": false, 00:32:59.790 "rdma_max_cq_size": 0, 00:32:59.790 "rdma_cm_event_timeout_ms": 0, 00:32:59.790 "dhchap_digests": [ 00:32:59.790 "sha256", 00:32:59.790 "sha384", 00:32:59.790 "sha512" 00:32:59.790 ], 00:32:59.790 "dhchap_dhgroups": [ 00:32:59.790 "null", 00:32:59.790 "ffdhe2048", 00:32:59.790 "ffdhe3072", 00:32:59.790 "ffdhe4096", 00:32:59.790 "ffdhe6144", 00:32:59.790 "ffdhe8192" 00:32:59.790 ] 00:32:59.790 } 00:32:59.790 }, 00:32:59.790 { 00:32:59.790 "method": "bdev_nvme_attach_controller", 00:32:59.790 "params": { 00:32:59.790 "name": "nvme0", 00:32:59.790 "trtype": "TCP", 00:32:59.790 "adrfam": "IPv4", 00:32:59.790 "traddr": "127.0.0.1", 00:32:59.790 "trsvcid": "4420", 00:32:59.790 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:59.790 "prchk_reftag": false, 00:32:59.790 "prchk_guard": false, 00:32:59.790 "ctrlr_loss_timeout_sec": 0, 00:32:59.790 "reconnect_delay_sec": 0, 00:32:59.790 "fast_io_fail_timeout_sec": 0, 00:32:59.790 "psk": "key0", 00:32:59.790 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:59.790 "hdgst": false, 00:32:59.790 "ddgst": false 00:32:59.790 } 00:32:59.790 }, 00:32:59.790 { 00:32:59.790 "method": "bdev_nvme_set_hotplug", 00:32:59.790 "params": { 00:32:59.790 "period_us": 100000, 00:32:59.790 "enable": false 00:32:59.790 } 00:32:59.790 }, 00:32:59.790 { 00:32:59.790 "method": "bdev_wait_for_examine" 00:32:59.790 } 00:32:59.790 ] 00:32:59.790 }, 00:32:59.790 { 00:32:59.790 "subsystem": "nbd", 00:32:59.790 "config": [] 00:32:59.790 } 00:32:59.790 ] 00:32:59.790 }' 00:32:59.790 11:48:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:59.790 [2024-07-15 11:48:34.228989] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:32:59.790 [2024-07-15 11:48:34.229050] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3017044 ] 00:33:00.049 EAL: No free 2048 kB hugepages reported on node 1 00:33:00.049 [2024-07-15 11:48:34.310379] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.049 [2024-07-15 11:48:34.416282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:00.308 [2024-07-15 11:48:34.588359] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:00.874 11:48:35 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:00.874 11:48:35 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:00.874 11:48:35 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:33:00.874 11:48:35 keyring_file -- keyring/file.sh@120 -- # jq length 00:33:00.874 11:48:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:01.133 11:48:35 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:33:01.133 11:48:35 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:33:01.133 11:48:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:01.133 11:48:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:01.133 11:48:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:01.133 11:48:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:01.133 11:48:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:01.392 11:48:35 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:01.392 11:48:35 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:33:01.392 11:48:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:01.392 11:48:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:01.392 11:48:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:01.392 11:48:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:01.392 11:48:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:01.650 11:48:35 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:33:01.650 11:48:35 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:33:01.650 11:48:35 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:33:01.650 11:48:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:01.909 11:48:36 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:33:01.909 11:48:36 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:01.909 11:48:36 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.m9lHS7OnFz /tmp/tmp.FdQrlu1Kbg 00:33:01.909 11:48:36 keyring_file -- keyring/file.sh@20 -- # killprocess 3017044 00:33:01.909 11:48:36 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3017044 ']' 00:33:01.909 11:48:36 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3017044 00:33:01.909 11:48:36 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:33:01.909 11:48:36 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:01.909 11:48:36 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3017044 00:33:01.909 11:48:36 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:01.909 11:48:36 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:01.909 11:48:36 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3017044' 00:33:01.909 killing process with pid 3017044 00:33:01.909 11:48:36 keyring_file -- common/autotest_common.sh@967 -- # kill 3017044 00:33:01.909 Received shutdown signal, test time was about 1.000000 seconds 00:33:01.909 00:33:01.909 Latency(us) 00:33:01.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:01.909 =================================================================================================================== 00:33:01.909 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:01.909 11:48:36 keyring_file -- common/autotest_common.sh@972 -- # wait 3017044 00:33:02.169 11:48:36 keyring_file -- keyring/file.sh@21 -- # killprocess 3014781 00:33:02.169 11:48:36 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3014781 ']' 00:33:02.169 11:48:36 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3014781 00:33:02.169 11:48:36 keyring_file -- common/autotest_common.sh@953 -- # uname 00:33:02.169 11:48:36 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:02.169 11:48:36 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3014781 00:33:02.169 11:48:36 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:02.169 11:48:36 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:02.169 11:48:36 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3014781' 00:33:02.169 killing process with pid 3014781 00:33:02.169 11:48:36 keyring_file -- common/autotest_common.sh@967 -- # kill 3014781 00:33:02.169 [2024-07-15 11:48:36.517587] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:02.169 11:48:36 keyring_file -- common/autotest_common.sh@972 -- # wait 3014781 00:33:02.428 00:33:02.428 real 0m15.298s 00:33:02.428 user 0m38.042s 00:33:02.428 sys 0m3.213s 00:33:02.428 11:48:36 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:02.428 11:48:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:02.428 ************************************ 00:33:02.428 END TEST keyring_file 00:33:02.428 ************************************ 00:33:02.688 11:48:36 -- common/autotest_common.sh@1142 -- # return 0 00:33:02.688 11:48:36 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:33:02.688 11:48:36 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:02.688 11:48:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:02.688 11:48:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:02.688 11:48:36 -- common/autotest_common.sh@10 -- # set +x 00:33:02.688 ************************************ 00:33:02.688 START TEST keyring_linux 00:33:02.688 ************************************ 00:33:02.688 11:48:36 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:02.688 * Looking for test storage... 00:33:02.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:02.688 11:48:37 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:02.688 11:48:37 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:02.688 11:48:37 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:02.688 11:48:37 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:02.688 11:48:37 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:02.688 11:48:37 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:02.688 11:48:37 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:02.688 11:48:37 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:02.688 11:48:37 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:02.688 11:48:37 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:02.688 11:48:37 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:02.688 11:48:37 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:02.688 11:48:37 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:02.688 11:48:37 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:33:02.688 11:48:37 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:33:02.688 11:48:37 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:02.688 11:48:37 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:02.688 11:48:37 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:02.688 11:48:37 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:02.688 11:48:37 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:02.688 11:48:37 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:02.688 11:48:37 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:02.688 11:48:37 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:02.688 11:48:37 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.688 11:48:37 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.688 11:48:37 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.688 11:48:37 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:02.688 11:48:37 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.688 11:48:37 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:33:02.689 11:48:37 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:02.689 11:48:37 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:02.689 11:48:37 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:02.689 11:48:37 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:02.689 11:48:37 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:02.689 11:48:37 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:02.689 11:48:37 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:02.689 11:48:37 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:02.689 11:48:37 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:02.689 11:48:37 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:02.689 11:48:37 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:02.689 11:48:37 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:02.689 11:48:37 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:02.689 11:48:37 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:02.689 11:48:37 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:02.689 11:48:37 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:02.689 11:48:37 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:33:02.689 11:48:37 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:02.689 11:48:37 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:02.689 11:48:37 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:02.689 11:48:37 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:02.689 11:48:37 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:02.689 11:48:37 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:02.689 11:48:37 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:02.689 11:48:37 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:02.689 11:48:37 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:02.689 11:48:37 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:02.689 11:48:37 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:02.689 11:48:37 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:02.689 /tmp/:spdk-test:key0 00:33:02.689 11:48:37 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:02.689 11:48:37 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:02.689 11:48:37 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:02.689 11:48:37 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:02.689 11:48:37 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:02.689 11:48:37 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:02.689 11:48:37 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:02.689 11:48:37 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:02.689 11:48:37 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:02.689 11:48:37 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:02.689 11:48:37 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:02.689 11:48:37 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:02.689 11:48:37 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:02.689 11:48:37 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:02.948 11:48:37 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:02.948 /tmp/:spdk-test:key1 00:33:02.948 11:48:37 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3017598 00:33:02.948 11:48:37 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3017598 00:33:02.948 11:48:37 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:02.948 11:48:37 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3017598 ']' 00:33:02.948 11:48:37 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:02.948 11:48:37 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:02.948 11:48:37 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:02.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:02.948 11:48:37 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:02.948 11:48:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:02.948 [2024-07-15 11:48:37.209456] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
00:33:02.948 [2024-07-15 11:48:37.209519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3017598 ] 00:33:02.948 EAL: No free 2048 kB hugepages reported on node 1 00:33:02.948 [2024-07-15 11:48:37.289939] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:02.948 [2024-07-15 11:48:37.379476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:03.913 11:48:38 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:03.913 11:48:38 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:33:03.913 11:48:38 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:03.913 11:48:38 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.913 11:48:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:03.913 [2024-07-15 11:48:38.141815] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:03.913 null0 00:33:03.913 [2024-07-15 11:48:38.173853] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:03.913 [2024-07-15 11:48:38.174234] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:03.913 11:48:38 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.913 11:48:38 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:03.913 710751301 00:33:03.913 11:48:38 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:03.913 573124211 00:33:03.913 11:48:38 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3017833 00:33:03.913 11:48:38 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3017833 /var/tmp/bperf.sock 00:33:03.913 11:48:38 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:03.913 11:48:38 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3017833 ']' 00:33:03.913 11:48:38 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:03.913 11:48:38 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:03.913 11:48:38 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:03.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:03.913 11:48:38 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:03.913 11:48:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:03.913 [2024-07-15 11:48:38.245363] Starting SPDK v24.09-pre git sha1 e85883441 / DPDK 24.03.0 initialization... 
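The keyring_linux variant exercised here stores the same interchange-format PSK in the kernel session keyring instead of a file; bdevperf enables the backend with keyring_linux_set_options --enable and then references the key by name via --psk :spdk-test:key0. The keyctl side of the test, condensed (serial number and key payload are the ones from this run):

    keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s
    sn=$(keyctl search @s user :spdk-test:key0)   # resolves to serial 710751301 in this run
    keyctl print "$sn"                            # prints the stored interchange PSK
    keyctl unlink "$sn"                           # cleanup; reported as "1 links removed"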
00:33:03.913 [2024-07-15 11:48:38.245417] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3017833 ] 00:33:03.913 EAL: No free 2048 kB hugepages reported on node 1 00:33:03.913 [2024-07-15 11:48:38.325027] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.173 [2024-07-15 11:48:38.424629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:04.173 11:48:38 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:04.173 11:48:38 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:33:04.173 11:48:38 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:04.173 11:48:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:04.431 11:48:38 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:04.431 11:48:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:04.689 11:48:39 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:04.689 11:48:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:04.948 [2024-07-15 11:48:39.264469] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:04.948 nvme0n1 00:33:04.948 11:48:39 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:33:04.948 11:48:39 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:04.948 11:48:39 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:04.948 11:48:39 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:04.948 11:48:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:04.948 11:48:39 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:05.208 11:48:39 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:05.208 11:48:39 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:05.208 11:48:39 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:05.208 11:48:39 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:05.208 11:48:39 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:05.208 11:48:39 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:05.208 11:48:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:05.467 11:48:39 keyring_linux -- keyring/linux.sh@25 -- # sn=710751301 00:33:05.467 11:48:39 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:05.467 11:48:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:33:05.467 11:48:39 keyring_linux -- keyring/linux.sh@26 -- # [[ 710751301 == \7\1\0\7\5\1\3\0\1 ]] 00:33:05.467 11:48:39 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 710751301 00:33:05.467 11:48:39 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:05.467 11:48:39 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:05.727 Running I/O for 1 seconds... 00:33:06.661 00:33:06.661 Latency(us) 00:33:06.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:06.662 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:06.662 nvme0n1 : 1.01 10093.59 39.43 0.00 0.00 12608.35 8996.31 23950.43 00:33:06.662 =================================================================================================================== 00:33:06.662 Total : 10093.59 39.43 0.00 0.00 12608.35 8996.31 23950.43 00:33:06.662 0 00:33:06.662 11:48:41 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:06.662 11:48:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:06.920 11:48:41 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:06.920 11:48:41 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:06.920 11:48:41 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:06.920 11:48:41 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:06.920 11:48:41 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:06.920 11:48:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:07.178 11:48:41 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:07.178 11:48:41 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:07.178 11:48:41 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:07.178 11:48:41 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:07.178 11:48:41 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:33:07.178 11:48:41 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:07.178 11:48:41 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:07.178 11:48:41 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:07.178 11:48:41 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:07.178 11:48:41 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:07.178 11:48:41 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:07.179 11:48:41 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:07.438 [2024-07-15 11:48:41.697536] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:07.438 [2024-07-15 11:48:41.697873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xab2c20 (107): Transport endpoint is not connected 00:33:07.438 [2024-07-15 11:48:41.698864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xab2c20 (9): Bad file descriptor 00:33:07.438 [2024-07-15 11:48:41.699864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:07.438 [2024-07-15 11:48:41.699880] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:07.438 [2024-07-15 11:48:41.699891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:07.438 request: 00:33:07.438 { 00:33:07.438 "name": "nvme0", 00:33:07.438 "trtype": "tcp", 00:33:07.438 "traddr": "127.0.0.1", 00:33:07.438 "adrfam": "ipv4", 00:33:07.438 "trsvcid": "4420", 00:33:07.438 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:07.438 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:07.438 "prchk_reftag": false, 00:33:07.438 "prchk_guard": false, 00:33:07.438 "hdgst": false, 00:33:07.438 "ddgst": false, 00:33:07.438 "psk": ":spdk-test:key1", 00:33:07.438 "method": "bdev_nvme_attach_controller", 00:33:07.438 "req_id": 1 00:33:07.438 } 00:33:07.438 Got JSON-RPC error response 00:33:07.438 response: 00:33:07.438 { 00:33:07.438 "code": -5, 00:33:07.438 "message": "Input/output error" 00:33:07.438 } 00:33:07.438 11:48:41 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:33:07.438 11:48:41 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:07.438 11:48:41 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:07.438 11:48:41 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:07.438 11:48:41 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:07.438 11:48:41 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:07.438 11:48:41 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:07.438 11:48:41 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:07.438 11:48:41 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:07.438 11:48:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:07.438 11:48:41 keyring_linux -- keyring/linux.sh@33 -- # sn=710751301 00:33:07.438 11:48:41 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 710751301 00:33:07.438 1 links removed 00:33:07.438 11:48:41 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:07.438 11:48:41 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:07.438 11:48:41 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:07.438 11:48:41 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:07.438 11:48:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:07.438 11:48:41 keyring_linux -- keyring/linux.sh@33 -- # sn=573124211 00:33:07.438 11:48:41 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 573124211 00:33:07.438 1 links removed 00:33:07.438 11:48:41 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3017833 00:33:07.438 11:48:41 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3017833 ']' 00:33:07.438 11:48:41 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3017833 00:33:07.438 11:48:41 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:07.438 11:48:41 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:07.438 11:48:41 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3017833 00:33:07.438 11:48:41 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:07.438 11:48:41 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:07.438 11:48:41 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3017833' 00:33:07.438 killing process with pid 3017833 00:33:07.438 11:48:41 keyring_linux -- common/autotest_common.sh@967 -- # kill 3017833 00:33:07.438 Received shutdown signal, test time was about 1.000000 seconds 00:33:07.438 00:33:07.438 Latency(us) 00:33:07.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.438 =================================================================================================================== 00:33:07.438 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:07.438 11:48:41 keyring_linux -- common/autotest_common.sh@972 -- # wait 3017833 00:33:07.698 11:48:42 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3017598 00:33:07.698 11:48:42 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3017598 ']' 00:33:07.698 11:48:42 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3017598 00:33:07.698 11:48:42 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:07.698 11:48:42 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:07.698 11:48:42 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3017598 00:33:07.698 11:48:42 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:07.698 11:48:42 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:07.698 11:48:42 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3017598' 00:33:07.698 killing process with pid 3017598 00:33:07.698 11:48:42 keyring_linux -- common/autotest_common.sh@967 -- # kill 3017598 00:33:07.698 11:48:42 keyring_linux -- common/autotest_common.sh@972 -- # wait 3017598 00:33:07.963 00:33:07.963 real 0m5.480s 00:33:07.963 user 0m10.272s 00:33:07.963 sys 0m1.593s 00:33:07.963 11:48:42 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:07.963 11:48:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:07.963 ************************************ 00:33:07.963 END TEST keyring_linux 00:33:07.963 ************************************ 00:33:08.222 11:48:42 -- common/autotest_common.sh@1142 -- # return 0 00:33:08.222 11:48:42 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:33:08.222 11:48:42 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:33:08.222 11:48:42 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:33:08.222 11:48:42 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:33:08.222 11:48:42 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:33:08.222 11:48:42 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:33:08.222 11:48:42 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:33:08.222 11:48:42 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:33:08.222 11:48:42 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:33:08.222 11:48:42 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:33:08.222 11:48:42 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:33:08.222 11:48:42 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:33:08.222 11:48:42 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:08.222 11:48:42 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:08.222 11:48:42 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:33:08.222 11:48:42 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:33:08.222 11:48:42 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:33:08.222 11:48:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:08.222 11:48:42 -- common/autotest_common.sh@10 -- # set +x 00:33:08.222 11:48:42 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:33:08.222 11:48:42 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:08.222 11:48:42 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:08.222 11:48:42 -- common/autotest_common.sh@10 -- # set +x 00:33:13.494 INFO: APP EXITING 00:33:13.494 INFO: killing all VMs 00:33:13.494 INFO: killing vhost app 00:33:13.494 WARN: no vhost pid file found 00:33:13.494 INFO: EXIT DONE 00:33:16.786 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:33:16.786 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:33:16.786 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:33:16.786 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:33:16.786 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:33:16.786 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:33:16.786 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:33:16.786 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:33:16.786 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:33:16.786 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:33:16.786 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:33:16.786 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:33:16.786 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:33:16.786 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:33:16.786 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:33:16.786 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:33:16.786 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:33:19.324 Cleaning 00:33:19.324 Removing: /var/run/dpdk/spdk0/config 00:33:19.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:19.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:19.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:19.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:19.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:19.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:19.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:19.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:19.324 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:19.324 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:19.324 Removing: /var/run/dpdk/spdk1/config 00:33:19.324 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:19.324 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:19.324 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:19.324 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:19.324 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:19.324 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:19.324 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:19.324 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:19.324 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:19.324 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:19.583 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:19.583 Removing: /var/run/dpdk/spdk2/config 00:33:19.583 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:19.583 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:19.583 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:19.583 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:19.583 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:19.583 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:19.583 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:19.583 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:19.583 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:19.583 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:19.583 Removing: /var/run/dpdk/spdk3/config 00:33:19.583 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:19.583 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:19.583 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:19.583 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:19.583 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:19.583 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:19.583 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:19.583 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:19.583 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:19.583 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:19.583 Removing: /var/run/dpdk/spdk4/config 00:33:19.583 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:19.583 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:19.583 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:19.583 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:19.583 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:19.583 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:19.583 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:19.583 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:19.583 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:19.583 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:19.583 Removing: /dev/shm/bdev_svc_trace.1 00:33:19.583 Removing: /dev/shm/nvmf_trace.0 00:33:19.583 Removing: /dev/shm/spdk_tgt_trace.pid2585703 00:33:19.583 Removing: /var/run/dpdk/spdk0 00:33:19.583 Removing: /var/run/dpdk/spdk1 00:33:19.583 Removing: /var/run/dpdk/spdk2 00:33:19.583 Removing: /var/run/dpdk/spdk3 00:33:19.583 Removing: /var/run/dpdk/spdk4 00:33:19.583 Removing: /var/run/dpdk/spdk_pid2583281 00:33:19.583 Removing: /var/run/dpdk/spdk_pid2584501 00:33:19.583 Removing: /var/run/dpdk/spdk_pid2585703 00:33:19.583 Removing: /var/run/dpdk/spdk_pid2586268 00:33:19.583 Removing: /var/run/dpdk/spdk_pid2587226 00:33:19.584 Removing: /var/run/dpdk/spdk_pid2587505 00:33:19.584 Removing: /var/run/dpdk/spdk_pid2588597 00:33:19.584 Removing: /var/run/dpdk/spdk_pid2588845 00:33:19.584 Removing: /var/run/dpdk/spdk_pid2589001 00:33:19.584 Removing: /var/run/dpdk/spdk_pid2590935 00:33:19.584 Removing: 
/var/run/dpdk/spdk_pid2592354 00:33:19.584 Removing: /var/run/dpdk/spdk_pid2592670 00:33:19.584 Removing: /var/run/dpdk/spdk_pid2592993 00:33:19.584 Removing: /var/run/dpdk/spdk_pid2593339 00:33:19.584 Removing: /var/run/dpdk/spdk_pid2593650 00:33:19.584 Removing: /var/run/dpdk/spdk_pid2593933 00:33:19.584 Removing: /var/run/dpdk/spdk_pid2594217 00:33:19.584 Removing: /var/run/dpdk/spdk_pid2594523 00:33:19.584 Removing: /var/run/dpdk/spdk_pid2595367 00:33:19.584 Removing: /var/run/dpdk/spdk_pid2599494 00:33:19.584 Removing: /var/run/dpdk/spdk_pid2599793 00:33:19.584 Removing: /var/run/dpdk/spdk_pid2600093 00:33:19.584 Removing: /var/run/dpdk/spdk_pid2600358 00:33:19.584 Removing: /var/run/dpdk/spdk_pid2600916 00:33:19.584 Removing: /var/run/dpdk/spdk_pid2601178 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2601733 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2601742 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2602041 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2602298 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2602594 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2602610 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2603226 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2603510 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2603833 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2604189 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2604406 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2604474 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2604754 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2605039 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2605316 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2605604 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2605885 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2606170 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2606450 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2606729 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2607014 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2607294 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2607577 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2607858 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2608145 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2608423 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2608723 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2609012 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2609315 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2609655 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2609946 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2610256 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2610441 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2610776 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2614618 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2661858 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2666653 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2677708 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2683307 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2687794 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2688366 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2695263 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2702435 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2702443 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2703478 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2704336 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2705316 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2705909 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2706097 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2706362 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2706374 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2706465 00:33:19.843 Removing: 
/var/run/dpdk/spdk_pid2707423 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2708463 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2709376 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2710043 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2710047 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2710316 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2711955 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2713075 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2722007 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2722293 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2726840 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2733391 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2736978 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2749037 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2758584 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2760618 00:33:19.843 Removing: /var/run/dpdk/spdk_pid2761594 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2779750 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2783706 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2820175 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2825392 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2827430 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2829451 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2829723 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2830002 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2830308 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2831167 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2833201 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2834583 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2835398 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2837797 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2838609 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2839437 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2843957 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2854522 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2858869 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2865393 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2866981 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2869112 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2873697 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2877998 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2885796 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2885799 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2890729 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2890888 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2891142 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2891660 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2891665 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2896474 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2897124 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2901776 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2904657 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2910519 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2916530 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2927002 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2934266 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2934268 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2954257 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2955026 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2955768 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2956456 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2957502 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2958296 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2959091 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2959741 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2964206 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2964475 00:33:20.103 Removing: 
/var/run/dpdk/spdk_pid2971357 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2971664 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2973928 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2982263 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2982393 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2987688 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2989742 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2991981 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2993179 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2995414 00:33:20.103 Removing: /var/run/dpdk/spdk_pid2996626 00:33:20.103 Removing: /var/run/dpdk/spdk_pid3005801 00:33:20.103 Removing: /var/run/dpdk/spdk_pid3006323 00:33:20.103 Removing: /var/run/dpdk/spdk_pid3006845 00:33:20.103 Removing: /var/run/dpdk/spdk_pid3009422 00:33:20.103 Removing: /var/run/dpdk/spdk_pid3009950 00:33:20.103 Removing: /var/run/dpdk/spdk_pid3010633 00:33:20.103 Removing: /var/run/dpdk/spdk_pid3014781 00:33:20.103 Removing: /var/run/dpdk/spdk_pid3014975 00:33:20.363 Removing: /var/run/dpdk/spdk_pid3017044 00:33:20.363 Removing: /var/run/dpdk/spdk_pid3017598 00:33:20.363 Removing: /var/run/dpdk/spdk_pid3017833 00:33:20.363 Clean 00:33:20.363 11:48:54 -- common/autotest_common.sh@1451 -- # return 0 00:33:20.363 11:48:54 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:33:20.363 11:48:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:20.363 11:48:54 -- common/autotest_common.sh@10 -- # set +x 00:33:20.363 11:48:54 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:33:20.363 11:48:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:20.363 11:48:54 -- common/autotest_common.sh@10 -- # set +x 00:33:20.363 11:48:54 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:20.363 11:48:54 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:33:20.363 11:48:54 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:33:20.363 11:48:54 -- spdk/autotest.sh@391 -- # hash lcov 00:33:20.363 11:48:54 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:20.363 11:48:54 -- spdk/autotest.sh@393 -- # hostname 00:33:20.363 11:48:54 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-16 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:33:20.622 geninfo: WARNING: invalid characters removed from testname! 
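The lcov capture above and the lcov calls that follow implement the coverage post-processing for this run: counters are captured from the build tree, merged with the pre-test baseline, and then filtered so only SPDK sources remain in the report. The condensed sketch below illustrates that flow; it is not the autotest script itself, $SPDK_DIR and $OUT are placeholders for the workspace and output paths shown in the log, and only the branch/function-coverage rc flags are kept for brevity.

#!/usr/bin/env bash
# Sketch of the lcov flow from this log; $SPDK_DIR and $OUT are placeholder paths.
set -e
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

# 1. Capture the counters produced by the test run, tagged with the node's hostname.
lcov $LCOV_OPTS -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"

# 2. Merge the pre-test baseline with the test capture.
lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# 3. Drop third-party and helper sources so the report covers SPDK code only.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
  lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
done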
00:33:52.817 11:49:23 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:53.076 11:49:27 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:56.361 11:49:30 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:58.894 11:49:33 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:02.182 11:49:36 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:04.712 11:49:38 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:07.996 11:49:41 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:07.996 11:49:41 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:07.996 11:49:41 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:07.996 11:49:41 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:07.996 11:49:41 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:07.996 11:49:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.996 11:49:41 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.996 11:49:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.996 11:49:41 -- paths/export.sh@5 -- $ export PATH 00:34:07.996 11:49:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.996 11:49:41 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:34:07.996 11:49:41 -- common/autobuild_common.sh@444 -- $ date +%s 00:34:07.996 11:49:41 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721036981.XXXXXX 00:34:07.996 11:49:41 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721036981.eGtgwP 00:34:07.996 11:49:41 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:34:07.996 11:49:41 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:34:07.996 11:49:41 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:34:07.996 11:49:41 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:34:07.996 11:49:41 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:34:07.996 11:49:41 -- common/autobuild_common.sh@460 -- $ get_config_params 00:34:07.996 11:49:41 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:34:07.996 11:49:41 -- common/autotest_common.sh@10 -- $ set +x 00:34:07.996 11:49:41 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:34:07.996 11:49:41 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:34:07.996 11:49:41 -- pm/common@17 -- $ local monitor 00:34:07.996 11:49:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:07.996 11:49:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:07.996 11:49:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:07.996 11:49:41 -- pm/common@21 -- $ date +%s 00:34:07.996 11:49:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:07.996 11:49:41 -- pm/common@21 -- $ date +%s 00:34:07.996 
11:49:41 -- pm/common@25 -- $ sleep 1 00:34:07.996 11:49:41 -- pm/common@21 -- $ date +%s 00:34:07.996 11:49:41 -- pm/common@21 -- $ date +%s 00:34:07.996 11:49:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721036981 00:34:07.996 11:49:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721036981 00:34:07.996 11:49:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721036981 00:34:07.996 11:49:41 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721036981 00:34:07.996 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721036981_collect-vmstat.pm.log 00:34:07.996 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721036981_collect-cpu-load.pm.log 00:34:07.996 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721036981_collect-cpu-temp.pm.log 00:34:07.996 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721036981_collect-bmc-pm.bmc.pm.log 00:34:08.564 11:49:42 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:34:08.564 11:49:42 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:34:08.564 11:49:42 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:08.564 11:49:42 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:08.564 11:49:42 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:08.564 11:49:42 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:08.564 11:49:42 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:08.564 11:49:42 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:34:08.564 11:49:42 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:08.564 11:49:42 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:08.564 11:49:42 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:08.564 11:49:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:34:08.564 11:49:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:34:08.564 11:49:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:08.564 11:49:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:34:08.564 11:49:42 -- pm/common@44 -- $ pid=3028971 00:34:08.564 11:49:42 -- pm/common@50 -- $ kill -TERM 3028971 00:34:08.564 11:49:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:08.564 11:49:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:34:08.564 11:49:42 -- pm/common@44 -- $ pid=3028973 00:34:08.564 11:49:42 -- pm/common@50 -- $ kill 
-TERM 3028973 00:34:08.564 11:49:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:08.564 11:49:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:34:08.564 11:49:42 -- pm/common@44 -- $ pid=3028974 00:34:08.564 11:49:42 -- pm/common@50 -- $ kill -TERM 3028974 00:34:08.564 11:49:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:08.564 11:49:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:34:08.564 11:49:42 -- pm/common@44 -- $ pid=3028997 00:34:08.564 11:49:42 -- pm/common@50 -- $ sudo -E kill -TERM 3028997 00:34:08.564 + [[ -n 2469631 ]] 00:34:08.564 + sudo kill 2469631 00:34:08.574 [Pipeline] } 00:34:08.591 [Pipeline] // stage 00:34:08.596 [Pipeline] } 00:34:08.611 [Pipeline] // timeout 00:34:08.614 [Pipeline] } 00:34:08.629 [Pipeline] // catchError 00:34:08.634 [Pipeline] } 00:34:08.650 [Pipeline] // wrap 00:34:08.655 [Pipeline] } 00:34:08.673 [Pipeline] // catchError 00:34:08.681 [Pipeline] stage 00:34:08.683 [Pipeline] { (Epilogue) 00:34:08.696 [Pipeline] catchError 00:34:08.697 [Pipeline] { 00:34:08.712 [Pipeline] echo 00:34:08.713 Cleanup processes 00:34:08.719 [Pipeline] sh 00:34:09.000 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:09.000 3029093 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:34:09.000 3029419 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:09.014 [Pipeline] sh 00:34:09.298 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:09.298 ++ grep -v 'sudo pgrep' 00:34:09.298 ++ awk '{print $1}' 00:34:09.298 + sudo kill -9 3029093 00:34:09.310 [Pipeline] sh 00:34:09.590 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:36.149 [Pipeline] sh 00:34:36.431 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:36.431 Artifacts sizes are good 00:34:36.447 [Pipeline] archiveArtifacts 00:34:36.454 Archiving artifacts 00:34:36.666 [Pipeline] sh 00:34:37.029 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:37.044 [Pipeline] cleanWs 00:34:37.055 [WS-CLEANUP] Deleting project workspace... 00:34:37.055 [WS-CLEANUP] Deferred wipeout is used... 00:34:37.061 [WS-CLEANUP] done 00:34:37.063 [Pipeline] } 00:34:37.078 [Pipeline] // catchError 00:34:37.090 [Pipeline] sh 00:34:37.368 + logger -p user.info -t JENKINS-CI 00:34:37.376 [Pipeline] } 00:34:37.392 [Pipeline] // stage 00:34:37.396 [Pipeline] } 00:34:37.412 [Pipeline] // node 00:34:37.417 [Pipeline] End of Pipeline 00:34:37.443 Finished: SUCCESS
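For reference, the stop_monitor_resources trace near the end of the log (pm/common@42 through pm/common@50) amounts to checking each collector's pid file and signalling the recorded process; the BMC power collector runs privileged, so it is signalled via sudo. The loop below is an illustrative reconstruction under those assumptions, not the pm/common script itself; $POWER_DIR stands in for the .../output/power directory and the pid-file handling is simplified.

# Illustrative sketch of the monitor shutdown traced above; $POWER_DIR is a
# placeholder for the Jenkins .../output/power directory.
for mon in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
  pidfile="$POWER_DIR/$mon.pid"
  [[ -e "$pidfile" ]] || continue
  if [[ "$mon" == collect-bmc-pm ]]; then
    sudo -E kill -TERM "$(cat "$pidfile")"   # BMC collector runs as root, so sudo is used
  else
    kill -TERM "$(cat "$pidfile")"
  fi
done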